Test Report: Docker_Linux_crio 17194

03b3a1191a73942c676aa26934a5795f62561627:2023-09-12:30988

Failed tests (6/298)

| Order | Failed test                                          | Duration (s) |
|-------|------------------------------------------------------|--------------|
| 25    | TestAddons/parallel/Ingress                          | 151.25       |
| 154   | TestIngressAddonLegacy/serial/ValidateIngressAddons  | 182.38       |
| 204   | TestMultiNode/serial/PingHostFrom2Pods               | 3.07         |
| 225   | TestRunningBinaryUpgrade                             | 61.24        |
| 239   | TestStoppedBinaryUpgrade/Upgrade                     | 92.42        |
| 263   | TestPause/serial/SecondStartNoReconfiguration        | 59.91        |
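Each failure below can usually be reproduced locally by re-running the corresponding integration test by name. A minimal sketch, assuming a minikube source checkout with out/minikube-linux-amd64 already built; the standard `go test -run` filter is shown, while any minikube-specific test flags (for example the start args selecting the docker driver and crio runtime, visible in the Audit table further down) are omitted and may be required:

	# Sketch only: re-run one failing integration test by name.
	go test ./test/integration -v -run "TestAddons/parallel/Ingress"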
TestAddons/parallel/Ingress (151.25s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-348433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-348433 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-348433 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4358a2c7-58a2-42fa-80b7-ea466a23ffe1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4358a2c7-58a2-42fa-80b7-ea466a23ffe1] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.008315494s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-348433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.163201031s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-348433 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-348433 addons disable ingress-dns --alsologtostderr -v=1: (1.203618668s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-348433 addons disable ingress --alsologtostderr -v=1: (7.577496714s)
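The curl through the ingress never returned within the 2m10s ssh window; exit status 28 is consistent with curl's "operation timed out" exit code being propagated back through minikube ssh. The failing check can be replayed by hand with the same commands the test issues (the profile name addons-348433 is specific to this run, and the readiness wait below only approximates the test helper's polling):

	# Re-apply the test's ingress and backend manifests, then repeat the curl the test timed out on.
	kubectl --context addons-348433 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-348433 replace --force -f testdata/nginx-pod-svc.yaml
	kubectl --context addons-348433 wait --for=condition=ready pod -l run=nginx --timeout=8m0s
	out/minikube-linux-amd64 -p addons-348433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"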
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-348433
helpers_test.go:235: (dbg) docker inspect addons-348433:

-- stdout --
	[
	    {
	        "Id": "c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d",
	        "Created": "2023-09-12T21:44:05.036925209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 24253,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T21:44:05.318872964Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0508862d812894c98deaaf3533e6d3386b479df1d249d4410a6247f1f44ad45d",
	        "ResolvConfPath": "/var/lib/docker/containers/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d/hosts",
	        "LogPath": "/var/lib/docker/containers/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d-json.log",
	        "Name": "/addons-348433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-348433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-348433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fbebe8c7d0cb346a335903ccfda8376182f7b26b9e03986f992f21497acbe03-init/diff:/var/lib/docker/overlay2/27d59bddd44498ba277aabbca5bbef44e363739d94cbe3a544670a142640c048/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fbebe8c7d0cb346a335903ccfda8376182f7b26b9e03986f992f21497acbe03/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fbebe8c7d0cb346a335903ccfda8376182f7b26b9e03986f992f21497acbe03/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fbebe8c7d0cb346a335903ccfda8376182f7b26b9e03986f992f21497acbe03/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-348433",
	                "Source": "/var/lib/docker/volumes/addons-348433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-348433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-348433",
	                "name.minikube.sigs.k8s.io": "addons-348433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7a25a3290c7397167947fc4052ab1c7df806710784b2cfb4e4234316c1d7e9be",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/7a25a3290c73",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-348433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "c4161df61a63",
	                        "addons-348433"
	                    ],
	                    "NetworkID": "09393a5b53ecee0e28e45dd02d8aa47614f495f3097792953137fa0f46e96f64",
	                    "EndpointID": "51788cb2bcd6f68d4dc6f68ed16fd13c4f3c57424ba9982649cd7462df20aeba",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
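The full docker inspect dump above can be narrowed to the fields the post-mortem actually relies on by using Go templates, as the harness itself does later in this log; a sketch only, reusing the container name from this run:

	# Pull out just the container state and the forwarded ssh port instead of the whole document.
	docker container inspect addons-348433 --format '{{.State.Status}} (pid {{.State.Pid}})'
	docker container inspect addons-348433 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'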
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-348433 -n addons-348433
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-348433 logs -n 25: (1.101075803s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-358025   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |                     |
	|         | -p download-only-358025        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-358025   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |                     |
	|         | -p download-only-358025        |                        |         |         |                     |                     |
	|         | --force --alsologtostderr      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | --all                          | minikube               | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:43 UTC |
	| delete  | -p download-only-358025        | download-only-358025   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:43 UTC |
	| delete  | -p download-only-358025        | download-only-358025   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:43 UTC |
	| start   | --download-only -p             | download-docker-828900 | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |                     |
	|         | download-docker-828900         |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p download-docker-828900      | download-docker-828900 | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:43 UTC |
	| start   | --download-only -p             | binary-mirror-484613   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |                     |
	|         | binary-mirror-484613           |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --binary-mirror                |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45871         |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-484613        | binary-mirror-484613   | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:43 UTC |
	| start   | -p addons-348433               | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC | 12 Sep 23 21:45 UTC |
	|         | --wait=true --memory=4000      |                        |         |         |                     |                     |
	|         | --alsologtostderr              |                        |         |         |                     |                     |
	|         | --addons=registry              |                        |         |         |                     |                     |
	|         | --addons=metrics-server        |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                        |         |         |                     |                     |
	|         | --addons=gcp-auth              |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                        |         |         |                     |                     |
	|         | --driver=docker                |                        |         |         |                     |                     |
	|         | --container-runtime=crio       |                        |         |         |                     |                     |
	|         | --addons=ingress               |                        |         |         |                     |                     |
	|         | --addons=ingress-dns           |                        |         |         |                     |                     |
	|         | --addons=helm-tiller           |                        |         |         |                     |                     |
	| addons  | enable headlamp                | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | -p addons-348433               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-348433 addons disable   | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | helm-tiller --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| ip      | addons-348433 ip               | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	| addons  | addons-348433 addons disable   | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | registry --alsologtostderr     |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-348433 addons           | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | disable metrics-server         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ssh     | addons-348433 ssh curl -s      | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                        |         |         |                     |                     |
	|         | nginx.example.com'             |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | addons-348433                  |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:45 UTC | 12 Sep 23 21:45 UTC |
	|         | addons-348433                  |                        |         |         |                     |                     |
	| addons  | addons-348433 addons           | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:46 UTC | 12 Sep 23 21:46 UTC |
	|         | disable csi-hostpath-driver    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| addons  | addons-348433 addons           | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:46 UTC | 12 Sep 23 21:46 UTC |
	|         | disable volumesnapshots        |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                        |         |         |                     |                     |
	| ip      | addons-348433 ip               | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:47 UTC | 12 Sep 23 21:47 UTC |
	| addons  | addons-348433 addons disable   | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:47 UTC | 12 Sep 23 21:48 UTC |
	|         | ingress-dns --alsologtostderr  |                        |         |         |                     |                     |
	|         | -v=1                           |                        |         |         |                     |                     |
	| addons  | addons-348433 addons disable   | addons-348433          | jenkins | v1.31.2 | 12 Sep 23 21:48 UTC | 12 Sep 23 21:48 UTC |
	|         | ingress --alsologtostderr -v=1 |                        |         |         |                     |                     |
	|---------|--------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 21:43:40
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:43:40.966585   23586 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:43:40.966680   23586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:40.966688   23586 out.go:309] Setting ErrFile to fd 2...
	I0912 21:43:40.966692   23586 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:40.966846   23586 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 21:43:40.967382   23586 out.go:303] Setting JSON to false
	I0912 21:43:40.968111   23586 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5169,"bootTime":1694549852,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:43:40.968173   23586 start.go:138] virtualization: kvm guest
	I0912 21:43:40.969883   23586 out.go:177] * [addons-348433] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:43:40.971497   23586 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 21:43:40.973223   23586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:43:40.971525   23586 notify.go:220] Checking for updates...
	I0912 21:43:40.974427   23586 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:43:40.975658   23586 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:43:40.976764   23586 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:43:40.977803   23586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:43:40.978889   23586 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 21:43:40.999021   23586 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 21:43:40.999114   23586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:43:41.049028   23586 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-09-12 21:43:41.040830798 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:43:41.049116   23586 docker.go:294] overlay module found
	I0912 21:43:41.050583   23586 out.go:177] * Using the docker driver based on user configuration
	I0912 21:43:41.051905   23586 start.go:298] selected driver: docker
	I0912 21:43:41.051923   23586 start.go:902] validating driver "docker" against <nil>
	I0912 21:43:41.051937   23586 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:43:41.052913   23586 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:43:41.101416   23586 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:39 SystemTime:2023-09-12 21:43:41.094183353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:43:41.101591   23586 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 21:43:41.101839   23586 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:43:41.103053   23586 out.go:177] * Using Docker driver with root privileges
	I0912 21:43:41.104184   23586 cni.go:84] Creating CNI manager for ""
	I0912 21:43:41.104209   23586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:43:41.104225   23586 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 21:43:41.104238   23586 start_flags.go:321] config:
	{Name:addons-348433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-348433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:43:41.105508   23586 out.go:177] * Starting control plane node addons-348433 in cluster addons-348433
	I0912 21:43:41.106579   23586 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 21:43:41.107693   23586 out.go:177] * Pulling base image ...
	I0912 21:43:41.108839   23586 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 21:43:41.108869   23586 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0912 21:43:41.108881   23586 cache.go:57] Caching tarball of preloaded images
	I0912 21:43:41.108939   23586 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 21:43:41.108972   23586 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 21:43:41.108989   23586 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 21:43:41.109308   23586 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/config.json ...
	I0912 21:43:41.109335   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/config.json: {Name:mk425630ad78fe8488bde0051265b28f9e572623 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:43:41.124051   23586 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0912 21:43:41.124146   23586 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory
	I0912 21:43:41.124160   23586 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory, skipping pull
	I0912 21:43:41.124163   23586 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in cache, skipping pull
	I0912 21:43:41.124170   23586 cache.go:153] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 as a tarball
	I0912 21:43:41.124177   23586 cache.go:163] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 from local cache
	I0912 21:43:51.990641   23586 cache.go:165] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 from cached tarball
	I0912 21:43:51.990678   23586 cache.go:195] Successfully downloaded all kic artifacts
	I0912 21:43:51.990723   23586 start.go:365] acquiring machines lock for addons-348433: {Name:mk533d24913a7a00ab5b98343cdea146ee46eb81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:43:51.990817   23586 start.go:369] acquired machines lock for "addons-348433" in 73.37µs
	I0912 21:43:51.990846   23586 start.go:93] Provisioning new machine with config: &{Name:addons-348433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-348433 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disabl
eMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:43:51.990922   23586 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:43:51.992400   23586 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0912 21:43:51.992667   23586 start.go:159] libmachine.API.Create for "addons-348433" (driver="docker")
	I0912 21:43:51.992692   23586 client.go:168] LocalClient.Create starting
	I0912 21:43:51.992791   23586 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 21:43:52.423981   23586 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 21:43:52.629385   23586 cli_runner.go:164] Run: docker network inspect addons-348433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:43:52.644412   23586 cli_runner.go:211] docker network inspect addons-348433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:43:52.644474   23586 network_create.go:281] running [docker network inspect addons-348433] to gather additional debugging logs...
	I0912 21:43:52.644491   23586 cli_runner.go:164] Run: docker network inspect addons-348433
	W0912 21:43:52.658553   23586 cli_runner.go:211] docker network inspect addons-348433 returned with exit code 1
	I0912 21:43:52.658577   23586 network_create.go:284] error running [docker network inspect addons-348433]: docker network inspect addons-348433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-348433 not found
	I0912 21:43:52.658592   23586 network_create.go:286] output of [docker network inspect addons-348433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-348433 not found
	
	** /stderr **
	I0912 21:43:52.658630   23586 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:43:52.673585   23586 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e60320}
	I0912 21:43:52.673616   23586 network_create.go:123] attempt to create docker network addons-348433 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0912 21:43:52.673650   23586 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-348433 addons-348433
	I0912 21:43:52.722622   23586 network_create.go:107] docker network addons-348433 192.168.49.0/24 created
	I0912 21:43:52.722654   23586 kic.go:117] calculated static IP "192.168.49.2" for the "addons-348433" container
	I0912 21:43:52.722714   23586 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:43:52.738197   23586 cli_runner.go:164] Run: docker volume create addons-348433 --label name.minikube.sigs.k8s.io=addons-348433 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:43:52.754012   23586 oci.go:103] Successfully created a docker volume addons-348433
	I0912 21:43:52.754082   23586 cli_runner.go:164] Run: docker run --rm --name addons-348433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348433 --entrypoint /usr/bin/test -v addons-348433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 21:43:59.939528   23586 cli_runner.go:217] Completed: docker run --rm --name addons-348433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348433 --entrypoint /usr/bin/test -v addons-348433:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib: (7.185409082s)
	I0912 21:43:59.939553   23586 oci.go:107] Successfully prepared a docker volume addons-348433
	I0912 21:43:59.939574   23586 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 21:43:59.939598   23586 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 21:43:59.939723   23586 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-348433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:44:04.973046   23586 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-348433:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.033260174s)
	I0912 21:44:04.973075   23586 kic.go:199] duration metric: took 5.033475 seconds to extract preloaded images to volume
	W0912 21:44:04.973195   23586 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 21:44:04.973288   23586 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:44:05.022921   23586 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-348433 --name addons-348433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-348433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-348433 --network addons-348433 --ip 192.168.49.2 --volume addons-348433:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 21:44:05.326018   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Running}}
	I0912 21:44:05.342380   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:05.359545   23586 cli_runner.go:164] Run: docker exec addons-348433 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:44:05.399946   23586 oci.go:144] the created container "addons-348433" has a running status.
	I0912 21:44:05.399971   23586 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa...
	I0912 21:44:05.496624   23586 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:44:05.515414   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:05.530838   23586 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:44:05.530860   23586 kic_runner.go:114] Args: [docker exec --privileged addons-348433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:44:05.617599   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:05.633667   23586 machine.go:88] provisioning docker machine ...
	I0912 21:44:05.633696   23586 ubuntu.go:169] provisioning hostname "addons-348433"
	I0912 21:44:05.633755   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:05.650956   23586 main.go:141] libmachine: Using SSH client type: native
	I0912 21:44:05.651978   23586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0912 21:44:05.652012   23586 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-348433 && echo "addons-348433" | sudo tee /etc/hostname
	I0912 21:44:05.653538   23586 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51144->127.0.0.1:32772: read: connection reset by peer
	I0912 21:44:08.798229   23586 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-348433
	
	I0912 21:44:08.798301   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:08.813941   23586 main.go:141] libmachine: Using SSH client type: native
	I0912 21:44:08.814357   23586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0912 21:44:08.814389   23586 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-348433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-348433/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-348433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:44:08.948420   23586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:44:08.948445   23586 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 21:44:08.948473   23586 ubuntu.go:177] setting up certificates
	I0912 21:44:08.948483   23586 provision.go:83] configureAuth start
	I0912 21:44:08.948533   23586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348433
	I0912 21:44:08.964113   23586 provision.go:138] copyHostCerts
	I0912 21:44:08.964185   23586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 21:44:08.964286   23586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 21:44:08.964345   23586 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 21:44:08.964389   23586 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.addons-348433 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube addons-348433]
	I0912 21:44:09.043755   23586 provision.go:172] copyRemoteCerts
	I0912 21:44:09.043811   23586 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:44:09.043841   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.059684   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:09.156406   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:44:09.175962   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:44:09.195524   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 21:44:09.214564   23586 provision.go:86] duration metric: configureAuth took 266.066535ms
	I0912 21:44:09.214593   23586 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:44:09.214760   23586 config.go:182] Loaded profile config "addons-348433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 21:44:09.214866   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.230695   23586 main.go:141] libmachine: Using SSH client type: native
	I0912 21:44:09.231025   23586 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32772 <nil> <nil>}
	I0912 21:44:09.231050   23586 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:44:09.446061   23586 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
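For reference, the SSH command logged just above writes a one-line sysconfig drop-in and restarts CRI-O; the %!s(MISSING) token is an artifact of minikube's own printf-style logging, not of the command that actually ran on the node. A minimal shell sketch of that step, reconstructed from the log (the %s format verb is an assumption):

	# write the CRI-O options drop-in and restart the runtime (sketch, run on the node)
	sudo mkdir -p /etc/sysconfig
	printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio
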
	I0912 21:44:09.446084   23586 machine.go:91] provisioned docker machine in 3.81239992s
	I0912 21:44:09.446094   23586 client.go:171] LocalClient.Create took 17.453395636s
	I0912 21:44:09.446114   23586 start.go:167] duration metric: libmachine.API.Create for "addons-348433" took 17.453448075s
	I0912 21:44:09.446123   23586 start.go:300] post-start starting for "addons-348433" (driver="docker")
	I0912 21:44:09.446141   23586 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:44:09.446214   23586 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:44:09.446259   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.461828   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:09.556519   23586 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:44:09.559178   23586 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:44:09.559222   23586 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:44:09.559232   23586 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:44:09.559241   23586 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 21:44:09.559250   23586 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 21:44:09.559303   23586 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 21:44:09.559325   23586 start.go:303] post-start completed in 113.191871ms
	I0912 21:44:09.559561   23586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348433
	I0912 21:44:09.576216   23586 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/config.json ...
	I0912 21:44:09.576438   23586 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:44:09.576474   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.592251   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:09.688886   23586 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:44:09.692508   23586 start.go:128] duration metric: createHost completed in 17.701574004s
	I0912 21:44:09.692528   23586 start.go:83] releasing machines lock for "addons-348433", held for 17.701697792s
	I0912 21:44:09.692607   23586 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-348433
	I0912 21:44:09.708422   23586 ssh_runner.go:195] Run: cat /version.json
	I0912 21:44:09.708480   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.708486   23586 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:44:09.708542   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:09.725633   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:09.726557   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:09.922118   23586 ssh_runner.go:195] Run: systemctl --version
	I0912 21:44:09.925805   23586 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:44:10.059004   23586 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:44:10.062886   23586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:44:10.078662   23586 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 21:44:10.078754   23586 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:44:10.102269   23586 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 21:44:10.102296   23586 start.go:469] detecting cgroup driver to use...
	I0912 21:44:10.102332   23586 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 21:44:10.102388   23586 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:44:10.114787   23586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:44:10.123733   23586 docker.go:196] disabling cri-docker service (if available) ...
	I0912 21:44:10.123784   23586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:44:10.135084   23586 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:44:10.146826   23586 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:44:10.225079   23586 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:44:10.304833   23586 docker.go:212] disabling docker service ...
	I0912 21:44:10.304899   23586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:44:10.321942   23586 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:44:10.332211   23586 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:44:10.404426   23586 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:44:10.481187   23586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:44:10.490486   23586 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:44:10.503425   23586 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 21:44:10.503494   23586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:44:10.511530   23586 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:44:10.511582   23586 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:44:10.519463   23586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:44:10.527086   23586 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:44:10.534803   23586 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:44:10.542201   23586 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:44:10.548958   23586 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:44:10.555729   23586 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:44:10.631366   23586 ssh_runner.go:195] Run: sudo systemctl restart crio
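The sed edits above pin CRI-O's pause image and cgroup manager before the restart. A quick way to confirm the resulting values on the node is sketched below; the expected lines are inferred from the sed expressions in this log, not from a dump of the file itself:

	# show the settings the sed edits above are expected to have produced
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
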
	I0912 21:44:10.734908   23586 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:44:10.734975   23586 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:44:10.737976   23586 start.go:537] Will wait 60s for crictl version
	I0912 21:44:10.738017   23586 ssh_runner.go:195] Run: which crictl
	I0912 21:44:10.740572   23586 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:44:10.771652   23586 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 21:44:10.771763   23586 ssh_runner.go:195] Run: crio --version
	I0912 21:44:10.804235   23586 ssh_runner.go:195] Run: crio --version
	I0912 21:44:10.837199   23586 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 21:44:10.838505   23586 cli_runner.go:164] Run: docker network inspect addons-348433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:44:10.853973   23586 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:44:10.857184   23586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:44:10.866430   23586 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 21:44:10.866480   23586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:44:10.913616   23586 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 21:44:10.913635   23586 crio.go:415] Images already preloaded, skipping extraction
	I0912 21:44:10.913672   23586 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:44:10.942599   23586 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 21:44:10.942616   23586 cache_images.go:84] Images are preloaded, skipping loading
	I0912 21:44:10.942675   23586 ssh_runner.go:195] Run: crio config
	I0912 21:44:10.980744   23586 cni.go:84] Creating CNI manager for ""
	I0912 21:44:10.980763   23586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:44:10.980780   23586 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 21:44:10.980796   23586 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-348433 NodeName:addons-348433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 21:44:10.980925   23586 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-348433"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
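The kubeadm config printed above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down and later consumed by kubeadm init. As a hedged aside, the same config can be exercised without changing node state via kubeadm's dry-run mode; this sketch reuses the binary path and config location that appear elsewhere in this log:

	# validate the generated config without applying anything (sketch, run on the node)
	sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
	  kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
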
	I0912 21:44:10.980990   23586 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=addons-348433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-348433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 21:44:10.981035   23586 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 21:44:10.988566   23586 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:44:10.988635   23586 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:44:10.995792   23586 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0912 21:44:11.010158   23586 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 21:44:11.024513   23586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
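The three scp steps above stage the kubelet drop-in, the kubelet unit, and the kubeadm config inside the node. A sketch of how one could inspect them through the minikube binary under test (profile name and binary path are taken from this run; output will vary):

	# view the kubelet unit plus its kubeadm drop-in, and the staged kubeadm config
	out/minikube-linux-amd64 -p addons-348433 ssh "systemctl cat kubelet"
	out/minikube-linux-amd64 -p addons-348433 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"
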
	I0912 21:44:11.038682   23586 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:44:11.041512   23586 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:44:11.050257   23586 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433 for IP: 192.168.49.2
	I0912 21:44:11.050286   23586 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.050405   23586 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 21:44:11.242385   23586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt ...
	I0912 21:44:11.242413   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt: {Name:mka8b5aa5d00310bbd3a58fce2698d8c604e6f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.242598   23586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key ...
	I0912 21:44:11.242618   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key: {Name:mkede97c144e84576605e20d685122592f369357 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.242729   23586 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 21:44:11.504413   23586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt ...
	I0912 21:44:11.504445   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt: {Name:mk572a0f997bc4f5d0bf96106cec190c8a216bf5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.504614   23586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key ...
	I0912 21:44:11.504624   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key: {Name:mk4f5ba962208af88eb14f6a95c3c895d19f0b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.504728   23586 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.key
	I0912 21:44:11.504741   23586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt with IP's: []
	I0912 21:44:11.648214   23586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt ...
	I0912 21:44:11.648242   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: {Name:mkc84705659187c591ef2c3886da881740395fc9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.648402   23586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.key ...
	I0912 21:44:11.648412   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.key: {Name:mk332035047f6a5f508f66040a43298981c3ddc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.648476   23586 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key.dd3b5fb2
	I0912 21:44:11.648491   23586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 21:44:11.770569   23586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt.dd3b5fb2 ...
	I0912 21:44:11.770597   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt.dd3b5fb2: {Name:mk05c9632f1437b387a9370bd3f7407ccfcab076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.770750   23586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key.dd3b5fb2 ...
	I0912 21:44:11.770760   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key.dd3b5fb2: {Name:mkd01a9d1d517a7f1c116d61b1b3f736fb861bc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.770823   23586 certs.go:337] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt
	I0912 21:44:11.770886   23586 certs.go:341] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key
	I0912 21:44:11.770927   23586 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.key
	I0912 21:44:11.770942   23586 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.crt with IP's: []
	I0912 21:44:11.926749   23586 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.crt ...
	I0912 21:44:11.926778   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.crt: {Name:mk27821a35147288fcb6c2eeed6a3e20a8840258 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.926930   23586 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.key ...
	I0912 21:44:11.926941   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.key: {Name:mk25b821ca20b48c4ea0da548bce91084ba9e554 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:11.927094   23586 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 21:44:11.927126   23586 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:44:11.927157   23586 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:44:11.927180   23586 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 21:44:11.927688   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 21:44:11.947749   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:44:11.968157   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:44:11.987657   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 21:44:12.008016   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:44:12.027820   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:44:12.047695   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:44:12.067323   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:44:12.086520   23586 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:44:12.105553   23586 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:44:12.119641   23586 ssh_runner.go:195] Run: openssl version
	I0912 21:44:12.124177   23586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:44:12.131682   23586 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:44:12.134407   23586 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:44:12.134453   23586 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:44:12.140205   23586 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
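The openssl and ln steps above install minikubeCA.pem into the system trust store using OpenSSL's subject-hash naming convention; b5213941.0 is simply the certificate's subject hash with a .0 suffix. A sketch of that convention, assuming the same paths as in the log:

	# derive the subject hash (the b5213941 value in the log comes from this command)
	HASH="$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)"
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
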
	I0912 21:44:12.147786   23586 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 21:44:12.150563   23586 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 21:44:12.150601   23586 kubeadm.go:404] StartCluster: {Name:addons-348433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-348433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:44:12.150669   23586 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:44:12.150704   23586 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:44:12.180455   23586 cri.go:89] found id: ""
	I0912 21:44:12.180510   23586 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:44:12.187823   23586 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:44:12.194972   23586 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:44:12.195030   23586 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:44:12.201959   23586 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:44:12.202005   23586 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:44:12.244250   23586 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 21:44:12.244371   23586 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 21:44:12.276246   23586 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0912 21:44:12.276336   23586 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 21:44:12.276386   23586 kubeadm.go:322] OS: Linux
	I0912 21:44:12.276456   23586 kubeadm.go:322] CGROUPS_CPU: enabled
	I0912 21:44:12.276517   23586 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0912 21:44:12.276567   23586 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0912 21:44:12.276681   23586 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0912 21:44:12.276763   23586 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0912 21:44:12.276841   23586 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0912 21:44:12.276922   23586 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0912 21:44:12.276996   23586 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0912 21:44:12.277058   23586 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0912 21:44:12.335126   23586 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:44:12.335294   23586 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:44:12.335445   23586 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 21:44:12.512615   23586 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:44:12.515179   23586 out.go:204]   - Generating certificates and keys ...
	I0912 21:44:12.515320   23586 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 21:44:12.515423   23586 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 21:44:12.566556   23586 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:44:12.698380   23586 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:44:12.841345   23586 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:44:12.965877   23586 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 21:44:13.046372   23586 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 21:44:13.046534   23586 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-348433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:44:13.134783   23586 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 21:44:13.134892   23586 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-348433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:44:13.236859   23586 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:44:13.329943   23586 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:44:13.371517   23586 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 21:44:13.371581   23586 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:44:13.469870   23586 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:44:13.652144   23586 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:44:13.843280   23586 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:44:14.042723   23586 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:44:14.043611   23586 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:44:14.045958   23586 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:44:14.048151   23586 out.go:204]   - Booting up control plane ...
	I0912 21:44:14.048268   23586 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:44:14.048401   23586 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:44:14.048763   23586 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:44:14.056653   23586 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:44:14.057416   23586 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:44:14.057457   23586 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 21:44:14.129088   23586 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 21:44:19.131194   23586 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002171 seconds
	I0912 21:44:19.131394   23586 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:44:19.142006   23586 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:44:19.661058   23586 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:44:19.661348   23586 kubeadm.go:322] [mark-control-plane] Marking the node addons-348433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 21:44:20.169781   23586 kubeadm.go:322] [bootstrap-token] Using token: a082co.48shmezkozcwwq2e
	I0912 21:44:20.171306   23586 out.go:204]   - Configuring RBAC rules ...
	I0912 21:44:20.171448   23586 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:44:20.175113   23586 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:44:20.180408   23586 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:44:20.182902   23586 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:44:20.186100   23586 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:44:20.188654   23586 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:44:20.197069   23586 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:44:20.433811   23586 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 21:44:20.579152   23586 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 21:44:20.621099   23586 kubeadm.go:322] 
	I0912 21:44:20.621272   23586 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 21:44:20.621295   23586 kubeadm.go:322] 
	I0912 21:44:20.621422   23586 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 21:44:20.621441   23586 kubeadm.go:322] 
	I0912 21:44:20.621473   23586 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 21:44:20.621566   23586 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:44:20.621665   23586 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:44:20.621691   23586 kubeadm.go:322] 
	I0912 21:44:20.621785   23586 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0912 21:44:20.621796   23586 kubeadm.go:322] 
	I0912 21:44:20.621857   23586 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 21:44:20.621867   23586 kubeadm.go:322] 
	I0912 21:44:20.621959   23586 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 21:44:20.622060   23586 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:44:20.622160   23586 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:44:20.622171   23586 kubeadm.go:322] 
	I0912 21:44:20.622288   23586 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:44:20.622411   23586 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 21:44:20.622420   23586 kubeadm.go:322] 
	I0912 21:44:20.622527   23586 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token a082co.48shmezkozcwwq2e \
	I0912 21:44:20.622655   23586 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 \
	I0912 21:44:20.622687   23586 kubeadm.go:322] 	--control-plane 
	I0912 21:44:20.622696   23586 kubeadm.go:322] 
	I0912 21:44:20.622797   23586 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:44:20.622808   23586 kubeadm.go:322] 
	I0912 21:44:20.622892   23586 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token a082co.48shmezkozcwwq2e \
	I0912 21:44:20.623031   23586 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 
	I0912 21:44:20.624266   23586 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0912 21:44:20.624418   23586 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:44:20.624460   23586 cni.go:84] Creating CNI manager for ""
	I0912 21:44:20.624473   23586 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:44:20.625965   23586 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 21:44:20.627403   23586 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 21:44:20.631241   23586 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 21:44:20.631254   23586 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 21:44:20.647483   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 21:44:21.254199   23586 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:44:21.254259   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:21.254320   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=addons-348433 minikube.k8s.io/updated_at=2023_09_12T21_44_21_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:21.328899   23586 ops.go:34] apiserver oom_adj: -16
	I0912 21:44:21.329029   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:21.388978   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:21.948941   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:22.449189   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:22.948565   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:23.448383   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:23.948733   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:24.448346   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:24.948707   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:25.448933   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:25.949310   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:26.448904   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:26.948900   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:27.449129   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:27.949145   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:28.448773   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:28.949190   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:29.448991   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:29.948344   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:30.448532   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:30.948703   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:31.448886   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:31.949048   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:32.448425   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:32.948700   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:33.449274   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:33.948722   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:34.448642   23586 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:44:34.510701   23586 kubeadm.go:1081] duration metric: took 13.256487022s to wait for elevateKubeSystemPrivileges.
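The long run of "kubectl get sa default" commands above is a readiness poll: minikube retries roughly every half second until the default service account exists, which is the 13.25s reported by the duration metric. A minimal shell sketch of an equivalent poll, using the kubectl path and kubeconfig shown in the log:

	# wait until the default service account is created in the new cluster (sketch)
	until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done
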
	I0912 21:44:34.510729   23586 kubeadm.go:406] StartCluster complete in 22.360132432s
	I0912 21:44:34.510747   23586 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:34.510837   23586 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:44:34.511159   23586 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:44:34.511323   23586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:44:34.511342   23586 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0912 21:44:34.511469   23586 addons.go:69] Setting volumesnapshots=true in profile "addons-348433"
	I0912 21:44:34.511490   23586 addons.go:231] Setting addon volumesnapshots=true in "addons-348433"
	I0912 21:44:34.511510   23586 addons.go:69] Setting ingress=true in profile "addons-348433"
	I0912 21:44:34.511521   23586 addons.go:69] Setting default-storageclass=true in profile "addons-348433"
	I0912 21:44:34.511527   23586 config.go:182] Loaded profile config "addons-348433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 21:44:34.511537   23586 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-348433"
	I0912 21:44:34.511538   23586 addons.go:69] Setting gcp-auth=true in profile "addons-348433"
	I0912 21:44:34.511544   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511553   23586 mustload.go:65] Loading cluster: addons-348433
	I0912 21:44:34.511531   23586 addons.go:231] Setting addon ingress=true in "addons-348433"
	I0912 21:44:34.511588   23586 addons.go:69] Setting metrics-server=true in profile "addons-348433"
	I0912 21:44:34.511606   23586 addons.go:231] Setting addon metrics-server=true in "addons-348433"
	I0912 21:44:34.511612   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511619   23586 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-348433"
	I0912 21:44:34.511668   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511678   23586 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-348433"
	I0912 21:44:34.511717   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511727   23586 addons.go:69] Setting helm-tiller=true in profile "addons-348433"
	I0912 21:44:34.511742   23586 config.go:182] Loaded profile config "addons-348433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 21:44:34.511752   23586 addons.go:231] Setting addon helm-tiller=true in "addons-348433"
	I0912 21:44:34.511794   23586 addons.go:69] Setting storage-provisioner=true in profile "addons-348433"
	I0912 21:44:34.511818   23586 addons.go:231] Setting addon storage-provisioner=true in "addons-348433"
	I0912 21:44:34.511822   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511849   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511915   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.511953   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512109   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512119   23586 addons.go:69] Setting ingress-dns=true in profile "addons-348433"
	I0912 21:44:34.512135   23586 addons.go:231] Setting addon ingress-dns=true in "addons-348433"
	I0912 21:44:34.512143   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512173   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.512242   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512253   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512489   23586 addons.go:69] Setting inspektor-gadget=true in profile "addons-348433"
	I0912 21:44:34.512511   23586 addons.go:231] Setting addon inspektor-gadget=true in "addons-348433"
	I0912 21:44:34.512548   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.511560   23586 addons.go:69] Setting cloud-spanner=true in profile "addons-348433"
	I0912 21:44:34.512636   23586 addons.go:231] Setting addon cloud-spanner=true in "addons-348433"
	I0912 21:44:34.512678   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.512821   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512983   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.513070   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512112   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.511779   23586 addons.go:69] Setting registry=true in profile "addons-348433"
	I0912 21:44:34.513653   23586 addons.go:231] Setting addon registry=true in "addons-348433"
	I0912 21:44:34.513721   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.514171   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.512549   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.540993   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0912 21:44:34.543249   23586 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0912 21:44:34.543183   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0912 21:44:34.544960   23586 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0912 21:44:34.544980   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0912 21:44:34.545120   23586 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0912 21:44:34.546417   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0912 21:44:34.546511   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0912 21:44:34.546540   23586 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0912 21:44:34.546552   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0912 21:44:34.547774   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0912 21:44:34.547831   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.547829   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0912 21:44:34.547838   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0912 21:44:34.549105   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0912 21:44:34.549165   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.549236   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.550673   23586 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-348433" context rescaled to 1 replicas
	I0912 21:44:34.550732   23586 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:44:34.551873   23586 out.go:177] * Verifying Kubernetes components...
	I0912 21:44:34.551009   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0912 21:44:34.553181   23586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:44:34.553102   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0912 21:44:34.555473   23586 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:44:34.556838   23586 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0912 21:44:34.556062   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.558083   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0912 21:44:34.558168   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0912 21:44:34.558220   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.558382   23586 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:44:34.558398   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:44:34.558441   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.558133   23586 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0912 21:44:34.562764   23586 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0912 21:44:34.562781   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0912 21:44:34.562829   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.577676   23586 addons.go:231] Setting addon default-storageclass=true in "addons-348433"
	I0912 21:44:34.577727   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:34.578193   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:34.590232   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.599653   23586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 21:44:34.601204   23586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 21:44:34.602409   23586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0912 21:44:34.604087   23586 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:44:34.604106   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0912 21:44:34.604158   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.611379   23586 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.10
	I0912 21:44:34.612672   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.610708   23586 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:44:34.612833   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:44:34.612886   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.612691   23586 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0912 21:44:34.613094   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0912 21:44:34.613132   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.614206   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.617539   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.633464   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.636584   23586 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0912 21:44:34.638500   23586 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:44:34.638520   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0912 21:44:34.638590   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.641388   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.655946   23586 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0912 21:44:34.657707   23586 out.go:177]   - Using image docker.io/registry:2.8.1
	I0912 21:44:34.657313   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.659064   23586 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0912 21:44:34.659079   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0912 21:44:34.658215   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.659672   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:34.663347   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.675018   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.680346   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:34.843395   23586 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:44:34.844464   23586 node_ready.go:35] waiting up to 6m0s for node "addons-348433" to be "Ready" ...
	I0912 21:44:34.847108   23586 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0912 21:44:34.847128   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0912 21:44:34.942485   23586 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0912 21:44:34.942516   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0912 21:44:35.022360   23586 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0912 21:44:35.022448   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0912 21:44:35.022510   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0912 21:44:35.029726   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0912 21:44:35.030276   23586 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0912 21:44:35.030296   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0912 21:44:35.038658   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0912 21:44:35.038686   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0912 21:44:35.132355   23586 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0912 21:44:35.132442   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0912 21:44:35.137390   23586 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0912 21:44:35.137416   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0912 21:44:35.140782   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:44:35.221504   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0912 21:44:35.221535   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0912 21:44:35.224554   23586 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:44:35.224575   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0912 21:44:35.231783   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0912 21:44:35.232952   23586 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:44:35.232977   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0912 21:44:35.237046   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:44:35.330221   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0912 21:44:35.330245   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0912 21:44:35.338391   23586 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0912 21:44:35.338464   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0912 21:44:35.345059   23586 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0912 21:44:35.345086   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0912 21:44:35.430016   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0912 21:44:35.432516   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0912 21:44:35.434030   23586 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:44:35.434094   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0912 21:44:35.542761   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0912 21:44:35.542852   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0912 21:44:35.544800   23586 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0912 21:44:35.544823   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0912 21:44:35.622431   23586 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0912 21:44:35.622471   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0912 21:44:35.638995   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0912 21:44:35.824535   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0912 21:44:35.824575   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0912 21:44:35.836052   23586 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0912 21:44:35.836084   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0912 21:44:35.934408   23586 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0912 21:44:35.934439   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0912 21:44:36.023600   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0912 21:44:36.023630   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0912 21:44:36.227045   23586 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0912 21:44:36.227150   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0912 21:44:36.322546   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0912 21:44:36.322615   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0912 21:44:36.440623   23586 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0912 21:44:36.440652   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0912 21:44:36.636008   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0912 21:44:36.636035   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0912 21:44:36.723177   23586 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:44:36.723264   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0912 21:44:36.921509   23586 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.07806823s)
	I0912 21:44:36.921552   23586 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
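The completed command above injects a host record into the CoreDNS Corefile: the sed pipeline inserts a hosts plugin block ahead of the existing forward directive (mapping host.minikube.internal to the host gateway 192.168.49.1) and a log directive ahead of errors, then pushes the edited ConfigMap back through kubectl replace. Reconstructed from the sed expressions in the command, the injected fragment is:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}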
	I0912 21:44:36.929009   23586 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:44:36.929082   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0912 21:44:37.031172   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0912 21:44:37.032266   23586 node_ready.go:58] node "addons-348433" has status "Ready":"False"
	I0912 21:44:37.231931   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0912 21:44:37.231956   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0912 21:44:37.339450   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:44:37.440447   23586 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:44:37.440474   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0912 21:44:37.839096   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0912 21:44:38.427071   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.404506581s)
	I0912 21:44:39.521555   23586 node_ready.go:58] node "addons-348433" has status "Ready":"False"
	I0912 21:44:40.847632   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.817864823s)
	I0912 21:44:40.847665   23586 addons.go:467] Verifying addon ingress=true in "addons-348433"
	I0912 21:44:40.849220   23586 out.go:177] * Verifying ingress addon...
	I0912 21:44:40.848026   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.707207881s)
	I0912 21:44:40.848099   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.616277678s)
	I0912 21:44:40.848157   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.611078661s)
	I0912 21:44:40.848287   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (5.418119956s)
	I0912 21:44:40.848356   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.41573395s)
	I0912 21:44:40.848397   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.209306063s)
	I0912 21:44:40.848491   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.81727788s)
	I0912 21:44:40.848643   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.509158103s)
	I0912 21:44:40.850875   23586 addons.go:467] Verifying addon metrics-server=true in "addons-348433"
	I0912 21:44:40.850892   23586 addons.go:467] Verifying addon registry=true in "addons-348433"
	I0912 21:44:40.852235   23586 out.go:177] * Verifying registry addon...
	W0912 21:44:40.850876   23586 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0912 21:44:40.851658   23586 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0912 21:44:40.853626   23586 retry.go:31] will retry after 223.598796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
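The failed apply above is a CRD-establishment race: the VolumeSnapshotClass object is submitted in the same kubectl apply batch as the CRDs that define it, so the CRDs are created but not yet registered when the class is mapped, and kubectl exits 1 with "no matches for kind". minikube treats this as retryable and, per the retry.go line, re-runs the apply after a short backoff, by which point the CRD is established. A minimal Go sketch of that retry pattern, with hypothetical names (an illustration of the idea, not minikube's retry.go):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryApply re-runs apply until it succeeds or attempts run out, sleeping
	// between tries so that freshly created CRDs have time to be established.
	func retryApply(attempts int, delay time.Duration, apply func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = apply(); err == nil {
				return nil
			}
			fmt.Printf("apply failed, will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryApply(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 2 { // the first apply races the CRD registration
				return errors.New("no matches for kind \"VolumeSnapshotClass\"")
			}
			return nil // the second apply succeeds once the CRD is established
		})
		fmt.Println("result:", err)
	}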
	I0912 21:44:40.854392   23586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0912 21:44:40.858194   23586 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0912 21:44:40.858210   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:40.924982   23586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:44:40.925011   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:40.925119   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:40.927998   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
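The kapi.go:96 lines above poll the cluster until every pod behind a label selector leaves the Pending phase. A minimal client-go sketch of that style of wait, assuming an already constructed clientset (function and parameter names are illustrative, not minikube's kapi API):

	package kapi

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls once per second until every pod matching
	// selector in namespace ns reports the Running phase, or until timeout.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
	}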
	I0912 21:44:41.077736   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0912 21:44:41.363739   23586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0912 21:44:41.363811   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:41.389733   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:41.430699   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:41.436877   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:41.721409   23586 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0912 21:44:41.842788   23586 addons.go:231] Setting addon gcp-auth=true in "addons-348433"
	I0912 21:44:41.842846   23586 host.go:66] Checking if "addons-348433" exists ...
	I0912 21:44:41.843328   23586 cli_runner.go:164] Run: docker container inspect addons-348433 --format={{.State.Status}}
	I0912 21:44:41.858711   23586 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0912 21:44:41.858753   23586 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-348433
	I0912 21:44:41.873483   23586 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32772 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/addons-348433/id_rsa Username:docker}
	I0912 21:44:41.936815   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:41.942620   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:41.945050   23586 node_ready.go:58] node "addons-348433" has status "Ready":"False"
	I0912 21:44:42.437949   23586 node_ready.go:49] node "addons-348433" has status "Ready":"True"
	I0912 21:44:42.437972   23586 node_ready.go:38] duration metric: took 7.593487472s waiting for node "addons-348433" to be "Ready" ...
	I0912 21:44:42.437982   23586 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
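The node_ready.go transition above (Ready "False" at 21:44:41, "True" at 21:44:42) is driven by the node's Ready condition, which flips once the CNI plugin (kindnet here) is up. A minimal check over a Node object already fetched from the API, for illustration:

	package nodeutil

	import corev1 "k8s.io/api/core/v1"

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}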
	I0912 21:44:42.442672   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:42.539665   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:42.930111   23586 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lx8mt" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:42.945965   23586 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0912 21:44:42.945989   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:43.026873   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:43.434569   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:43.443618   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:43.933038   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:43.934684   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:44.538816   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:44.542219   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:44.943449   23586 pod_ready.go:102] pod "coredns-5dd5756b68-lx8mt" in "kube-system" namespace has status "Ready":"False"
	I0912 21:44:44.944843   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:44.946828   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:45.449062   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:45.524144   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:45.538422   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.699239188s)
	I0912 21:44:45.538481   23586 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-348433"
	I0912 21:44:45.540185   23586 out.go:177] * Verifying csi-hostpath-driver addon...
	I0912 21:44:45.542616   23586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0912 21:44:45.638176   23586 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0912 21:44:45.638253   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:45.725643   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:45.941030   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:46.026610   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.948812949s)
	I0912 21:44:46.026725   23586 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.167984572s)
	I0912 21:44:46.028461   23586 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0912 21:44:46.027575   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:46.030028   23586 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0912 21:44:46.031659   23586 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0912 21:44:46.031680   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0912 21:44:46.141200   23586 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0912 21:44:46.141220   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0912 21:44:46.233900   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:46.321906   23586 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:44:46.321930   23586 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0912 21:44:46.342118   23586 pod_ready.go:92] pod "coredns-5dd5756b68-lx8mt" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.342142   23586 pod_ready.go:81] duration metric: took 3.411603293s waiting for pod "coredns-5dd5756b68-lx8mt" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.342160   23586 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.346127   23586 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0912 21:44:46.428476   23586 pod_ready.go:92] pod "etcd-addons-348433" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.428730   23586 pod_ready.go:81] duration metric: took 86.558363ms waiting for pod "etcd-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.428787   23586 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.432626   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:46.434838   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:46.436438   23586 pod_ready.go:92] pod "kube-apiserver-addons-348433" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.436470   23586 pod_ready.go:81] duration metric: took 7.663877ms waiting for pod "kube-apiserver-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.436482   23586 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.442365   23586 pod_ready.go:92] pod "kube-controller-manager-addons-348433" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.442436   23586 pod_ready.go:81] duration metric: took 5.922604ms waiting for pod "kube-controller-manager-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.442462   23586 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mkjtr" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.524825   23586 pod_ready.go:92] pod "kube-proxy-mkjtr" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.524885   23586 pod_ready.go:81] duration metric: took 82.406345ms waiting for pod "kube-proxy-mkjtr" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.524920   23586 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.733554   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:46.737726   23586 pod_ready.go:92] pod "kube-scheduler-addons-348433" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:46.737808   23586 pod_ready.go:81] duration metric: took 212.869321ms waiting for pod "kube-scheduler-addons-348433" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.737834   23586 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:46.930386   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:46.933679   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:47.231931   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:47.430580   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:47.433819   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:47.731768   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:47.929637   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:47.934606   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:48.133523   23586 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.787358321s)
	I0912 21:44:48.134353   23586 addons.go:467] Verifying addon gcp-auth=true in "addons-348433"
	I0912 21:44:48.136369   23586 out.go:177] * Verifying gcp-auth addon...
	I0912 21:44:48.138946   23586 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0912 21:44:48.141860   23586 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0912 21:44:48.141875   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:48.148113   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:48.231044   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:48.430438   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:48.433120   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:48.651463   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:48.731096   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:48.929575   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:48.932621   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:49.043955   23586 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace has status "Ready":"False"
	I0912 21:44:49.152083   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:49.231569   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:49.429674   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:49.433854   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:49.652168   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:49.731271   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:49.931220   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:49.932319   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:50.151162   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:50.230822   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:50.430080   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:50.431741   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:50.651488   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:50.731914   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:50.930298   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:50.932185   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:51.152154   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:51.230698   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:51.429835   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:51.431913   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:51.542982   23586 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace has status "Ready":"False"
	I0912 21:44:51.651820   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:51.731076   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:51.929184   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:51.931847   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:52.151175   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:52.231096   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:52.430083   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:52.431694   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:52.651344   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:52.731197   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:52.928554   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:52.931973   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:53.152221   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:53.230497   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:53.429677   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:53.432571   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:53.651968   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:53.731788   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:53.932655   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:53.933513   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:54.043446   23586 pod_ready.go:102] pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace has status "Ready":"False"
	I0912 21:44:54.151209   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:54.231039   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:54.429200   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:54.431645   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:54.651490   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:54.730638   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:54.930396   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:54.932353   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:55.152695   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:55.233271   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:55.429045   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:55.432964   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:55.652385   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:55.731338   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:55.929133   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:55.932410   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:56.151883   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:56.231354   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:56.429093   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:56.431468   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:56.542707   23586 pod_ready.go:92] pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace has status "Ready":"True"
	I0912 21:44:56.542726   23586 pod_ready.go:81] duration metric: took 9.804874977s waiting for pod "metrics-server-7c66d45ddc-xc9jw" in "kube-system" namespace to be "Ready" ...
	I0912 21:44:56.542745   23586 pod_ready.go:38] duration metric: took 14.104734555s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:44:56.542760   23586 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:44:56.542797   23586 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:44:56.553566   23586 api_server.go:72] duration metric: took 22.002800891s to wait for apiserver process to appear ...
	I0912 21:44:56.553589   23586 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:44:56.553605   23586 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:44:56.558209   23586 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:44:56.559125   23586 api_server.go:141] control plane version: v1.28.1
	I0912 21:44:56.559147   23586 api_server.go:131] duration metric: took 5.552399ms to wait for apiserver health ...
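The healthz probe logged above is a plain HTTPS GET against /healthz on the control-plane endpoint; a 200 response with body "ok" is taken as healthy. A minimal sketch that skips certificate verification for brevity (a real client would trust the cluster CA from the kubeconfig; names are illustrative):

	package health

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy GETs <endpoint>/healthz and reports whether the API
	// server answered 200 with the literal body "ok".
	func apiserverHealthy(endpoint string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false, err
		}
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}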
	I0912 21:44:56.559157   23586 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:44:56.566151   23586 system_pods.go:59] 18 kube-system pods found
	I0912 21:44:56.566175   23586 system_pods.go:61] "coredns-5dd5756b68-lx8mt" [c831d401-2e45-467c-a3c1-6d5404a80fd4] Running
	I0912 21:44:56.566182   23586 system_pods.go:61] "csi-hostpath-attacher-0" [e30bc7a9-29df-4468-9e70-dd90c71a870a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:44:56.566190   23586 system_pods.go:61] "csi-hostpath-resizer-0" [1e0c7f52-e0d6-4bfc-b0e4-4db0c2bfbc66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:44:56.566197   23586 system_pods.go:61] "csi-hostpathplugin-t6mmt" [8d01844e-baa5-4a06-bdf6-65607530a1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:44:56.566202   23586 system_pods.go:61] "etcd-addons-348433" [b956e8fd-afe2-4e8f-b782-14ccee8b44e9] Running
	I0912 21:44:56.566210   23586 system_pods.go:61] "kindnet-2lfdv" [5d1bd6fc-dcac-4dc9-9e59-617af8f4c18c] Running
	I0912 21:44:56.566215   23586 system_pods.go:61] "kube-apiserver-addons-348433" [7171c9cb-f86b-4acf-8f37-bb67e9b43f77] Running
	I0912 21:44:56.566227   23586 system_pods.go:61] "kube-controller-manager-addons-348433" [8400f344-38ad-4131-9ce3-5b298e5642e6] Running
	I0912 21:44:56.566233   23586 system_pods.go:61] "kube-ingress-dns-minikube" [320aeba6-b9c0-4415-835e-f265a0278621] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:44:56.566240   23586 system_pods.go:61] "kube-proxy-mkjtr" [4f3f8bfb-7fcc-413f-837f-3de95c8273d8] Running
	I0912 21:44:56.566245   23586 system_pods.go:61] "kube-scheduler-addons-348433" [b595210b-3505-4482-8bb8-e862f2587d4d] Running
	I0912 21:44:56.566251   23586 system_pods.go:61] "metrics-server-7c66d45ddc-xc9jw" [8592098a-a0be-4d41-b6c5-cfe27b75aa31] Running
	I0912 21:44:56.566257   23586 system_pods.go:61] "registry-99mmd" [bd420210-8e2d-41d5-8549-97497ba31036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:44:56.566265   23586 system_pods.go:61] "registry-proxy-f8tqr" [9f8000c3-3670-4964-94d7-368bd159e70f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:44:56.566274   23586 system_pods.go:61] "snapshot-controller-58dbcc7b99-j7gd8" [3c78a3d5-2dc1-406b-8b4d-5b3550cc786c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:44:56.566284   23586 system_pods.go:61] "snapshot-controller-58dbcc7b99-qjs7j" [43eaba56-bb36-4781-b08a-d9c88fe2e5ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:44:56.566291   23586 system_pods.go:61] "storage-provisioner" [82fc679a-97eb-4dd3-910d-72394d0eb68c] Running
	I0912 21:44:56.566296   23586 system_pods.go:61] "tiller-deploy-7b677967b9-pqpkv" [2cf5d2c4-bcda-4708-887e-1364874e0576] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:44:56.566303   23586 system_pods.go:74] duration metric: took 7.141188ms to wait for pod list to return data ...
	I0912 21:44:56.566310   23586 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:44:56.568125   23586 default_sa.go:45] found service account: "default"
	I0912 21:44:56.568140   23586 default_sa.go:55] duration metric: took 1.823514ms for default service account to be created ...
	I0912 21:44:56.568147   23586 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:44:56.577458   23586 system_pods.go:86] 18 kube-system pods found
	I0912 21:44:56.577481   23586 system_pods.go:89] "coredns-5dd5756b68-lx8mt" [c831d401-2e45-467c-a3c1-6d5404a80fd4] Running
	I0912 21:44:56.577491   23586 system_pods.go:89] "csi-hostpath-attacher-0" [e30bc7a9-29df-4468-9e70-dd90c71a870a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0912 21:44:56.577499   23586 system_pods.go:89] "csi-hostpath-resizer-0" [1e0c7f52-e0d6-4bfc-b0e4-4db0c2bfbc66] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0912 21:44:56.577509   23586 system_pods.go:89] "csi-hostpathplugin-t6mmt" [8d01844e-baa5-4a06-bdf6-65607530a1a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0912 21:44:56.577517   23586 system_pods.go:89] "etcd-addons-348433" [b956e8fd-afe2-4e8f-b782-14ccee8b44e9] Running
	I0912 21:44:56.577525   23586 system_pods.go:89] "kindnet-2lfdv" [5d1bd6fc-dcac-4dc9-9e59-617af8f4c18c] Running
	I0912 21:44:56.577537   23586 system_pods.go:89] "kube-apiserver-addons-348433" [7171c9cb-f86b-4acf-8f37-bb67e9b43f77] Running
	I0912 21:44:56.577549   23586 system_pods.go:89] "kube-controller-manager-addons-348433" [8400f344-38ad-4131-9ce3-5b298e5642e6] Running
	I0912 21:44:56.577562   23586 system_pods.go:89] "kube-ingress-dns-minikube" [320aeba6-b9c0-4415-835e-f265a0278621] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0912 21:44:56.577572   23586 system_pods.go:89] "kube-proxy-mkjtr" [4f3f8bfb-7fcc-413f-837f-3de95c8273d8] Running
	I0912 21:44:56.577579   23586 system_pods.go:89] "kube-scheduler-addons-348433" [b595210b-3505-4482-8bb8-e862f2587d4d] Running
	I0912 21:44:56.577585   23586 system_pods.go:89] "metrics-server-7c66d45ddc-xc9jw" [8592098a-a0be-4d41-b6c5-cfe27b75aa31] Running
	I0912 21:44:56.577592   23586 system_pods.go:89] "registry-99mmd" [bd420210-8e2d-41d5-8549-97497ba31036] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0912 21:44:56.577604   23586 system_pods.go:89] "registry-proxy-f8tqr" [9f8000c3-3670-4964-94d7-368bd159e70f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0912 21:44:56.577619   23586 system_pods.go:89] "snapshot-controller-58dbcc7b99-j7gd8" [3c78a3d5-2dc1-406b-8b4d-5b3550cc786c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:44:56.577634   23586 system_pods.go:89] "snapshot-controller-58dbcc7b99-qjs7j" [43eaba56-bb36-4781-b08a-d9c88fe2e5ef] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0912 21:44:56.577645   23586 system_pods.go:89] "storage-provisioner" [82fc679a-97eb-4dd3-910d-72394d0eb68c] Running
	I0912 21:44:56.577656   23586 system_pods.go:89] "tiller-deploy-7b677967b9-pqpkv" [2cf5d2c4-bcda-4708-887e-1364874e0576] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0912 21:44:56.577667   23586 system_pods.go:126] duration metric: took 9.514225ms to wait for k8s-apps to be running ...
	I0912 21:44:56.577678   23586 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:44:56.577727   23586 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:44:56.588419   23586 system_svc.go:56] duration metric: took 10.732322ms WaitForService to wait for kubelet.
	I0912 21:44:56.588438   23586 kubeadm.go:581] duration metric: took 22.037680909s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 21:44:56.588458   23586 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:44:56.591027   23586 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 21:44:56.591049   23586 node_conditions.go:123] node cpu capacity is 8
	I0912 21:44:56.591060   23586 node_conditions.go:105] duration metric: took 2.597216ms to run NodePressure ...
	I0912 21:44:56.591070   23586 start.go:228] waiting for startup goroutines ...
	I0912 21:44:56.651763   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:56.731890   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:56.929544   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:56.932045   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:57.151847   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:57.231468   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:57.431720   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:57.432263   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:57.652389   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:57.730621   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:57.929389   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:57.931722   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:58.150999   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:58.230362   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:58.430134   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:58.432829   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:58.652025   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:58.732052   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:58.931757   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:58.932440   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:59.151898   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:59.231326   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:59.429252   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:59.432246   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:44:59.652008   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:44:59.730346   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:44:59.929496   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:44:59.931787   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:00.151454   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:00.230704   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:00.429952   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:00.432051   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:00.651760   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:00.730691   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:00.930557   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:00.933109   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:01.229269   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:01.232336   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:01.429539   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:01.432480   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:01.652248   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:01.730935   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:01.930378   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:01.933475   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:02.151620   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:02.231384   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:02.429715   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:02.432102   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:02.652116   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:02.732010   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:02.930194   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:02.933518   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:03.151950   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:03.231552   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:03.429414   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:03.431913   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:03.652071   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:03.730007   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:03.930196   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:03.932847   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:04.172699   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:04.231551   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:04.429745   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:04.431939   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:04.651601   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:04.730767   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:04.929434   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:04.932109   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:05.151055   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:05.231179   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:05.429542   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:05.432959   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:05.651647   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:05.733668   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:05.930727   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:05.934110   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:06.151881   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:06.231013   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:06.430063   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:06.432047   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:06.651707   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:06.731021   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:06.928958   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:06.949550   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:07.152430   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:07.230607   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:07.429797   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:07.434268   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:07.651579   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:07.730648   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:07.929996   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:07.932822   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:08.151093   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:08.230135   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:08.428974   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:08.431586   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:08.651672   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:08.731925   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:08.930128   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:08.932755   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:09.151535   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:09.231396   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:09.429428   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:09.432484   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:09.652016   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:09.731177   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:09.930518   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:09.932156   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:10.152244   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:10.231032   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:10.430089   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:10.432427   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:10.652713   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:10.731456   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:10.929788   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:10.932568   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:11.151737   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:11.231118   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:11.429080   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:11.431457   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:11.651149   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:11.730354   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:11.929027   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:11.931369   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0912 21:45:12.150866   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:12.231360   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:12.429754   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:12.431845   23586 kapi.go:107] duration metric: took 31.577452438s to wait for kubernetes.io/minikube-addons=registry ...
	I0912 21:45:12.651466   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:12.730801   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:12.929619   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:13.152602   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:13.231387   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:13.431160   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:13.651192   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:13.730890   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:13.928535   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:14.151920   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:14.231305   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:14.429465   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:14.652205   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:14.730293   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:14.929300   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:15.151864   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:15.231774   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:15.429238   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:15.651246   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:15.736149   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:15.937445   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:16.222644   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:16.233789   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:16.430233   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:16.723305   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:16.731760   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:16.930452   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:17.223281   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:17.231456   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:17.429666   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:17.723502   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:17.731298   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:17.929878   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:18.151847   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:18.232379   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:18.429178   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:18.652000   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:18.730361   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:18.929447   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:19.151974   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:19.230963   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:19.428989   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:19.651235   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:19.731042   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:19.929714   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:20.151302   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:20.231229   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:20.429187   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:20.653241   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:20.736679   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:20.929248   23586 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0912 21:45:21.151713   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:21.231671   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:21.431440   23586 kapi.go:107] duration metric: took 40.579778639s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0912 21:45:21.651844   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:21.731320   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:22.152363   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:22.230524   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:22.651815   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:22.731423   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:23.151973   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:23.231367   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:23.651864   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:23.731281   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:24.151572   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0912 21:45:24.230671   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:24.651560   23586 kapi.go:107] duration metric: took 36.512612946s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0912 21:45:24.652959   23586 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-348433 cluster.
	I0912 21:45:24.654166   23586 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0912 21:45:24.655332   23586 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0912 21:45:24.730998   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:25.230472   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:25.734582   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:26.230001   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:26.731715   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:27.232181   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:27.730388   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:28.230692   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:28.730592   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:29.230480   23586 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0912 21:45:29.731033   23586 kapi.go:107] duration metric: took 44.188417303s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0912 21:45:29.733854   23586 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, metrics-server, default-storageclass, helm-tiller, inspektor-gadget, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0912 21:45:29.735300   23586 addons.go:502] enable addons completed in 55.22395997s: enabled=[cloud-spanner ingress-dns storage-provisioner metrics-server default-storageclass helm-tiller inspektor-gadget volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0912 21:45:29.735335   23586 start.go:233] waiting for cluster config update ...
	I0912 21:45:29.735348   23586 start.go:242] writing updated cluster config ...
	I0912 21:45:29.735571   23586 ssh_runner.go:195] Run: rm -f paused
	I0912 21:45:29.784470   23586 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 21:45:29.786286   23586 out.go:177] * Done! kubectl is now configured to use "addons-348433" cluster and "default" namespace by default
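	The gcp-auth messages above note that a pod can opt out of the automatic credential mount by carrying a label with the `gcp-auth-skip-secret` key. A minimal pod manifest sketch of that opt-out follows; the pod name and the label value are illustrative assumptions (the message above only mentions the key), and the image is simply the one already used in this test run:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds                # hypothetical name, for illustration only
	    labels:
	      gcp-auth-skip-secret: "true"    # label key from the hint above; value chosen arbitrarily
	  spec:
	    containers:
	    - name: app
	      image: gcr.io/google-samples/hello-app:1.0   # image pulled elsewhere in this run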
	
	* 
	* ==> CRI-O <==
	* Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.363554419Z" level=info msg="Pulled image: gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb" id=3c43e0a4-6c51-4095-8ea2-29bc8cfb0281 name=/runtime.v1.ImageService/PullImage
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.364312128Z" level=info msg="Checking image status: gcr.io/google-samples/hello-app:1.0" id=e117b155-cc27-4047-8eb2-18bd490196ee name=/runtime.v1.ImageService/ImageStatus
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.365263705Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a39a074194753e46f21cfbf0b4253444939f276ed100d23d5fc568ada19a9ebb,RepoTags:[gcr.io/google-samples/hello-app:1.0],RepoDigests:[gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb],Size_:28999826,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=e117b155-cc27-4047-8eb2-18bd490196ee name=/runtime.v1.ImageService/ImageStatus
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.366041860Z" level=info msg="Creating container: default/hello-world-app-5d77478584-l8dkw/hello-world-app" id=c1ef0f07-b2e7-4ab5-a919-973db4cc9f39 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.366414530Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.451674342Z" level=info msg="Created container 8584cf20fbae3a0ac981b5cee314112e536e4152a3e9274c840865bc4503befa: default/hello-world-app-5d77478584-l8dkw/hello-world-app" id=c1ef0f07-b2e7-4ab5-a919-973db4cc9f39 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.452196040Z" level=info msg="Starting container: 8584cf20fbae3a0ac981b5cee314112e536e4152a3e9274c840865bc4503befa" id=290e234e-372b-4e9b-8d08-ee6d7f417b61 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.459827219Z" level=info msg="Started container" PID=9320 containerID=8584cf20fbae3a0ac981b5cee314112e536e4152a3e9274c840865bc4503befa description=default/hello-world-app-5d77478584-l8dkw/hello-world-app id=290e234e-372b-4e9b-8d08-ee6d7f417b61 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e69d66fce4ebe37140573cba41cdb5fd484d02977e99b1a609d79e0991796bda
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.818680534Z" level=info msg="Removing container: 6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3" id=5a45ef31-91f5-4308-a5bd-2d0ab0788575 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 12 21:48:00 addons-348433 crio[947]: time="2023-09-12 21:48:00.833360746Z" level=info msg="Removed container 6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3: kube-system/kube-ingress-dns-minikube/minikube-ingress-dns" id=5a45ef31-91f5-4308-a5bd-2d0ab0788575 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 12 21:48:02 addons-348433 crio[947]: time="2023-09-12 21:48:02.344230959Z" level=info msg="Stopping container: 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9 (timeout: 2s)" id=dc488dc9-e225-43c8-b1fb-cb6eccb27dc7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.351734880Z" level=warning msg="Stopping container 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=dc488dc9-e225-43c8-b1fb-cb6eccb27dc7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 12 21:48:04 addons-348433 conmon[5298]: conmon 17aefbcceb43d7b79a46 <ninfo>: container 5310 exited with status 137
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.492933065Z" level=info msg="Stopped container 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9: ingress-nginx/ingress-nginx-controller-798b8b85d7-q92t9/controller" id=dc488dc9-e225-43c8-b1fb-cb6eccb27dc7 name=/runtime.v1.RuntimeService/StopContainer
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.493412047Z" level=info msg="Stopping pod sandbox: 668333b767bdcc85a3e2f5d28be338bcf93213bda524f731bfac07549fa9fcaf" id=3918705e-625f-407c-80f3-c2414c6a077c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.496141806Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-A6D2CECBZFZFDTII - [0:0]\n:KUBE-HP-DEJOVF4MF5I2KVZR - [0:0]\n-X KUBE-HP-A6D2CECBZFZFDTII\n-X KUBE-HP-DEJOVF4MF5I2KVZR\nCOMMIT\n"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.497378569Z" level=info msg="Closing host port tcp:80"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.497410732Z" level=info msg="Closing host port tcp:443"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.498569471Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.498583115Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.498700344Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-798b8b85d7-q92t9 Namespace:ingress-nginx ID:668333b767bdcc85a3e2f5d28be338bcf93213bda524f731bfac07549fa9fcaf UID:ea1a7973-c383-4e44-846d-1b18dfaf6d6f NetNS:/var/run/netns/941ec69e-53dc-4f24-8145-380b87f2853b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.498808012Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-798b8b85d7-q92t9 from CNI network \"kindnet\" (type=ptp)"
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.538047355Z" level=info msg="Stopped pod sandbox: 668333b767bdcc85a3e2f5d28be338bcf93213bda524f731bfac07549fa9fcaf" id=3918705e-625f-407c-80f3-c2414c6a077c name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.830382727Z" level=info msg="Removing container: 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9" id=46bf3c5c-a0a4-4e4b-a39b-9bd22f1cb6ba name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 12 21:48:04 addons-348433 crio[947]: time="2023-09-12 21:48:04.845038904Z" level=info msg="Removed container 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9: ingress-nginx/ingress-nginx-controller-798b8b85d7-q92t9/controller" id=46bf3c5c-a0a4-4e4b-a39b-9bd22f1cb6ba name=/runtime.v1.RuntimeService/RemoveContainer
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8584cf20fbae3       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb                      8 seconds ago       Running             hello-world-app           0                   e69d66fce4ebe       hello-world-app-5d77478584-l8dkw
	8f6192f9a37b5       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                              2 minutes ago       Running             nginx                     0                   9a561cc2cf056       nginx
	285093b167096       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                        2 minutes ago       Running             headlamp                  0                   d996bd3def1ef       headlamp-699c48fb74-crbr7
	b82a2c9c9e2f8       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   a913d0e2df1ce       gcp-auth-d4c87556c-c9rjx
	64b0f5e97cef3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              patch                     0                   0b3718c8672fb       ingress-nginx-admission-patch-t8qws
	7f7092e7f3068       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d   3 minutes ago       Exited              create                    0                   a6686464b5785       ingress-nginx-admission-create-bbcpq
	36d2d5a0d0113       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             3 minutes ago       Running             coredns                   0                   05fe305890abd       coredns-5dd5756b68-lx8mt
	c4469b0779f43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   8010f9f3d138a       storage-provisioner
	3912baca249c1       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                           3 minutes ago       Running             kindnet-cni               0                   30e4822cf44a1       kindnet-2lfdv
	d9b096f614a6f       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                             3 minutes ago       Running             kube-proxy                0                   c2033d75dc29e       kube-proxy-mkjtr
	36a2899032a3b       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                             3 minutes ago       Running             kube-apiserver            0                   6782df69c6782       kube-apiserver-addons-348433
	d4262472c4df7       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                             3 minutes ago       Running             kube-scheduler            0                   d91fb4f56ccb2       kube-scheduler-addons-348433
	f755fc766a756       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             3 minutes ago       Running             etcd                      0                   5477c0815e3c0       etcd-addons-348433
	2cf8beb4e0dd4       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                             3 minutes ago       Running             kube-controller-manager   0                   d724668a83457       kube-controller-manager-addons-348433
	
	* 
	* ==> coredns [36d2d5a0d0113266e55b9b56fa8f4d5ae7809dcd34cbb35a36f3b2c2f58d7318] <==
	* [INFO] 10.244.0.9:49307 - 29125 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033984s
	[INFO] 10.244.0.9:43994 - 50956 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.003798477s
	[INFO] 10.244.0.9:43994 - 28169 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,rd,ra 91 0.00549031s
	[INFO] 10.244.0.9:52033 - 9567 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004689865s
	[INFO] 10.244.0.9:52033 - 47197 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005185932s
	[INFO] 10.244.0.9:59817 - 31586 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004614811s
	[INFO] 10.244.0.9:59817 - 15996 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004674897s
	[INFO] 10.244.0.9:38649 - 56647 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052985s
	[INFO] 10.244.0.9:38649 - 16961 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008891s
	[INFO] 10.244.0.18:51450 - 16249 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201737s
	[INFO] 10.244.0.18:42752 - 25460 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000138691s
	[INFO] 10.244.0.18:42344 - 8679 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00013191s
	[INFO] 10.244.0.18:44031 - 54962 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000134203s
	[INFO] 10.244.0.18:52125 - 23557 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102782s
	[INFO] 10.244.0.18:56899 - 60629 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093443s
	[INFO] 10.244.0.18:36515 - 26466 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.007567441s
	[INFO] 10.244.0.18:57371 - 1171 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 75 0.00951961s
	[INFO] 10.244.0.18:46841 - 53967 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.006667112s
	[INFO] 10.244.0.18:54030 - 60184 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00930999s
	[INFO] 10.244.0.18:45026 - 32391 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.006263127s
	[INFO] 10.244.0.18:44622 - 1316 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.008613618s
	[INFO] 10.244.0.18:56241 - 57142 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 306 0.000653692s
	[INFO] 10.244.0.18:43008 - 29070 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00069971s
	[INFO] 10.244.0.22:58466 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000195723s
	[INFO] 10.244.0.22:34437 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000122761s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-348433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-348433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=addons-348433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T21_44_21_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-348433
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 21:44:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-348433
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 21:48:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 21:46:23 +0000   Tue, 12 Sep 2023 21:44:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 21:46:23 +0000   Tue, 12 Sep 2023 21:44:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 21:46:23 +0000   Tue, 12 Sep 2023 21:44:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 21:46:23 +0000   Tue, 12 Sep 2023 21:44:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-348433
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 0bb1edb4549045a9be832bd9879d616b
	  System UUID:                aed1f98c-f72e-4696-bb7f-bf2baf87feb9
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-l8dkw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  gcp-auth                    gcp-auth-d4c87556c-c9rjx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m21s
	  headlamp                    headlamp-699c48fb74-crbr7                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  kube-system                 coredns-5dd5756b68-lx8mt                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m35s
	  kube-system                 etcd-addons-348433                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m49s
	  kube-system                 kindnet-2lfdv                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m35s
	  kube-system                 kube-apiserver-addons-348433             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-addons-348433    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-mkjtr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-addons-348433             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m55s (x9 over 3m55s)  kubelet          Node addons-348433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s (x8 over 3m55s)  kubelet          Node addons-348433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s (x7 over 3m55s)  kubelet          Node addons-348433 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s                  kubelet          Node addons-348433 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s                  kubelet          Node addons-348433 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s                  kubelet          Node addons-348433 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m36s                  node-controller  Node addons-348433 event: Registered Node addons-348433 in Controller
	  Normal  NodeReady                3m27s                  kubelet          Node addons-348433 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.013665] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.010964] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001270] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.001323] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001276] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.001998] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.002204] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.001285] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.001302] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.001259] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +9.212708] kauditd_printk_skb: 36 callbacks suppressed
	[Sep12 21:45] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[  +1.023418] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[  +2.015774] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000021] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[  +4.255596] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[Sep12 21:46] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[ +16.130363] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	[ +32.764837] IPv4: martian source 10.244.0.17 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d2 d0 0a 42 19 96 72 d6 3f 8c 45 88 08 00
	
	* 
	* ==> etcd [f755fc766a7562598295946a3c41d56b3cd40647a21148ff5b993fe4d0d10cba] <==
	* {"level":"info","ts":"2023-09-12T21:44:38.840959Z","caller":"traceutil/trace.go:171","msg":"trace[193959798] transaction","detail":"{read_only:false; response_revision:467; number_of_response:1; }","duration":"102.187561ms","start":"2023-09-12T21:44:38.738754Z","end":"2023-09-12T21:44:38.840942Z","steps":["trace[193959798] 'process raft request'  (duration: 90.730647ms)","trace[193959798] 'compare'  (duration: 11.20491ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T21:44:42.637822Z","caller":"traceutil/trace.go:171","msg":"trace[1411527342] linearizableReadLoop","detail":"{readStateIndex:705; appliedIndex:704; }","duration":"107.460481ms","start":"2023-09-12T21:44:42.530321Z","end":"2023-09-12T21:44:42.637781Z","steps":["trace[1411527342] 'read index received'  (duration: 9.13648ms)","trace[1411527342] 'applied index is now lower than readState.Index'  (duration: 98.323267ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T21:44:42.834498Z","caller":"traceutil/trace.go:171","msg":"trace[1151394211] transaction","detail":"{read_only:false; response_revision:686; number_of_response:1; }","duration":"304.830095ms","start":"2023-09-12T21:44:42.529642Z","end":"2023-09-12T21:44:42.834472Z","steps":["trace[1151394211] 'compare'  (duration: 98.128765ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:42.835327Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T21:44:42.529625Z","time spent":"305.627922ms","remote":"127.0.0.1:52630","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":724,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns-5dd5756b68-lx8mt.178444de92467315\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns-5dd5756b68-lx8mt.178444de92467315\" value_size:636 lease:8128023758614386617 >> failure:<>"}
	{"level":"warn","ts":"2023-09-12T21:44:42.837045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"396.207019ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/csi-provisioner-role-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-12T21:44:42.837139Z","caller":"traceutil/trace.go:171","msg":"trace[1220272097] range","detail":"{range_begin:/registry/rolebindings/kube-system/csi-provisioner-role-cfg; range_end:; response_count:0; response_revision:686; }","duration":"396.29478ms","start":"2023-09-12T21:44:42.440816Z","end":"2023-09-12T21:44:42.83711Z","steps":["trace[1220272097] 'agreement among raft nodes before linearized reading'  (duration: 197.121809ms)","trace[1220272097] 'range keys from in-memory index tree'  (duration: 199.064099ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T21:44:42.841945Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T21:44:42.440807Z","time spent":"401.11389ms","remote":"127.0.0.1:52938","response type":"/etcdserverpb.KV/Range","request count":0,"request size":61,"response count":0,"response size":29,"request content":"key:\"/registry/rolebindings/kube-system/csi-provisioner-role-cfg\" "}
	{"level":"warn","ts":"2023-09-12T21:44:42.92177Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"391.884819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-lx8mt\" ","response":"range_response_count:1 size:3818"}
	{"level":"info","ts":"2023-09-12T21:44:42.921849Z","caller":"traceutil/trace.go:171","msg":"trace[1109969629] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-lx8mt; range_end:; response_count:1; response_revision:686; }","duration":"391.969311ms","start":"2023-09-12T21:44:42.529864Z","end":"2023-09-12T21:44:42.921833Z","steps":["trace[1109969629] 'agreement among raft nodes before linearized reading'  (duration: 305.683088ms)","trace[1109969629] 'range keys from in-memory index tree'  (duration: 86.155189ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T21:44:42.921901Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T21:44:42.529857Z","time spent":"392.033246ms","remote":"127.0.0.1:52734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":3842,"request content":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-lx8mt\" "}
	{"level":"warn","ts":"2023-09-12T21:44:42.92221Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"476.658293ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:15 size:57391"}
	{"level":"info","ts":"2023-09-12T21:44:42.922243Z","caller":"traceutil/trace.go:171","msg":"trace[1309886739] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:15; response_revision:686; }","duration":"476.710147ms","start":"2023-09-12T21:44:42.445524Z","end":"2023-09-12T21:44:42.922234Z","steps":["trace[1309886739] 'agreement among raft nodes before linearized reading'  (duration: 390.037573ms)","trace[1309886739] 'range keys from in-memory index tree'  (duration: 86.514211ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T21:44:42.922288Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-12T21:44:42.445513Z","time spent":"476.750072ms","remote":"127.0.0.1:52734","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":15,"response size":57415,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2023-09-12T21:44:42.936019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.120907ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/tiller\" ","response":"range_response_count:1 size:878"}
	{"level":"info","ts":"2023-09-12T21:44:42.936143Z","caller":"traceutil/trace.go:171","msg":"trace[1802110200] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/tiller; range_end:; response_count:1; response_revision:691; }","duration":"100.245156ms","start":"2023-09-12T21:44:42.835886Z","end":"2023-09-12T21:44:42.936131Z","steps":["trace[1802110200] 'agreement among raft nodes before linearized reading'  (duration: 100.098344ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:42.936577Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.559545ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2023-09-12T21:44:43.025785Z","caller":"traceutil/trace.go:171","msg":"trace[2120960756] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:691; }","duration":"189.762741ms","start":"2023-09-12T21:44:42.836005Z","end":"2023-09-12T21:44:43.025767Z","steps":["trace[2120960756] 'agreement among raft nodes before linearized reading'  (duration: 100.539135ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:42.937203Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.284435ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/coredns\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2023-09-12T21:44:43.025876Z","caller":"traceutil/trace.go:171","msg":"trace[274938866] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/coredns; range_end:; response_count:1; response_revision:691; }","duration":"189.959485ms","start":"2023-09-12T21:44:42.835909Z","end":"2023-09-12T21:44:43.025869Z","steps":["trace[274938866] 'agreement among raft nodes before linearized reading'  (duration: 101.259945ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:42.93724Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.298292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:183"}
	{"level":"info","ts":"2023-09-12T21:44:43.025934Z","caller":"traceutil/trace.go:171","msg":"trace[252944314] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:691; }","duration":"189.990493ms","start":"2023-09-12T21:44:42.835938Z","end":"2023-09-12T21:44:43.025928Z","steps":["trace[252944314] 'agreement among raft nodes before linearized reading'  (duration: 101.283741ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:42.937442Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.524625ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission\" ","response":"range_response_count:1 size:979"}
	{"level":"info","ts":"2023-09-12T21:44:43.026004Z","caller":"traceutil/trace.go:171","msg":"trace[1088872640] range","detail":"{range_begin:/registry/serviceaccounts/ingress-nginx/ingress-nginx-admission; range_end:; response_count:1; response_revision:691; }","duration":"190.085044ms","start":"2023-09-12T21:44:42.83591Z","end":"2023-09-12T21:44:43.025995Z","steps":["trace[1088872640] 'agreement among raft nodes before linearized reading'  (duration: 101.507199ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-12T21:44:43.026182Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.007608ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/snapshot-controller\" ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2023-09-12T21:44:43.026244Z","caller":"traceutil/trace.go:171","msg":"trace[298590115] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/snapshot-controller; range_end:; response_count:1; response_revision:691; }","duration":"190.289843ms","start":"2023-09-12T21:44:42.835944Z","end":"2023-09-12T21:44:43.026234Z","steps":["trace[298590115] 'agreement among raft nodes before linearized reading'  (duration: 99.992714ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [b82a2c9c9e2f82324c66df038aa59749ea39ff938d990ca425b646be2154eed9] <==
	* 2023/09/12 21:45:23 GCP Auth Webhook started!
	2023/09/12 21:45:30 Ready to marshal response ...
	2023/09/12 21:45:30 Ready to write response ...
	2023/09/12 21:45:30 Ready to marshal response ...
	2023/09/12 21:45:30 Ready to write response ...
	2023/09/12 21:45:30 Ready to marshal response ...
	2023/09/12 21:45:30 Ready to write response ...
	2023/09/12 21:45:34 Ready to marshal response ...
	2023/09/12 21:45:34 Ready to write response ...
	2023/09/12 21:45:39 Ready to marshal response ...
	2023/09/12 21:45:39 Ready to write response ...
	2023/09/12 21:45:39 Ready to marshal response ...
	2023/09/12 21:45:39 Ready to write response ...
	2023/09/12 21:45:52 Ready to marshal response ...
	2023/09/12 21:45:52 Ready to write response ...
	2023/09/12 21:46:26 Ready to marshal response ...
	2023/09/12 21:46:26 Ready to write response ...
	2023/09/12 21:47:59 Ready to marshal response ...
	2023/09/12 21:47:59 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  21:48:09 up  1:30,  0 users,  load average: 0.42, 0.64, 0.31
	Linux addons-348433 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [3912baca249c10736751cb68d7d5690e904e9b26da27979a515cb6d399349722] <==
	* I0912 21:46:01.681378       1 main.go:227] handling current node
	I0912 21:46:11.684559       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:46:11.684583       1 main.go:227] handling current node
	I0912 21:46:21.695880       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:46:21.695901       1 main.go:227] handling current node
	I0912 21:46:31.699958       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:46:31.699985       1 main.go:227] handling current node
	I0912 21:46:41.711839       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:46:41.711860       1 main.go:227] handling current node
	I0912 21:46:51.715756       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:46:51.715780       1 main.go:227] handling current node
	I0912 21:47:01.727700       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:01.727723       1 main.go:227] handling current node
	I0912 21:47:11.731883       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:11.731906       1 main.go:227] handling current node
	I0912 21:47:21.743553       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:21.743575       1 main.go:227] handling current node
	I0912 21:47:31.746792       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:31.746816       1 main.go:227] handling current node
	I0912 21:47:41.755982       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:41.756005       1 main.go:227] handling current node
	I0912 21:47:51.759563       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:47:51.759587       1 main.go:227] handling current node
	I0912 21:48:01.772618       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:48:01.772647       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [36a2899032a3baabecbec1f3973360c9f06fcc2e84f32b467700fdc23d1d1933] <==
	* I0912 21:46:41.952250       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:41.952387       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:46:41.960665       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:41.960788       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:46:41.977965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:41.978045       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:46:42.022002       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:42.022132       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:46:42.039788       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:42.039918       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0912 21:46:42.046653       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0912 21:46:42.046706       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0912 21:46:42.124364       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0912 21:46:42.124383       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0912 21:46:42.125735       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0912 21:46:42.127284       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0912 21:46:42.961481       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0912 21:46:43.047522       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0912 21:46:43.051442       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0912 21:46:57.234085       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0912 21:46:57.234118       1 handler_proxy.go:93] no RequestInfo found in the context
	E0912 21:46:57.234156       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0912 21:46:57.234164       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0912 21:47:59.377898       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.103.35.252"}
	
	* 
	* ==> kube-controller-manager [2cf8beb4e0dd450ae0bc63503e1dd707eb43845a1bab3aeb92b16aaab0e8da18] <==
	* E0912 21:47:20.093051       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0912 21:47:29.358074       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:47:29.358100       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0912 21:47:30.558293       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:47:30.558319       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0912 21:47:59.221580       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0912 21:47:59.234339       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-l8dkw"
	I0912 21:47:59.239510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="18.124358ms"
	I0912 21:47:59.243527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="3.907521ms"
	I0912 21:47:59.243615       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="48.998µs"
	I0912 21:47:59.243705       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.111µs"
	I0912 21:47:59.249613       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="38.531µs"
	I0912 21:48:00.844819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="5.493161ms"
	I0912 21:48:00.844911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="45.428µs"
	I0912 21:48:01.333576       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0912 21:48:01.335186       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="5.207µs"
	I0912 21:48:01.338092       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W0912 21:48:03.151681       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:48:03.151707       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0912 21:48:05.595427       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:48:05.595455       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0912 21:48:06.541300       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:48:06.541330       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0912 21:48:07.707836       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0912 21:48:07.707867       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	* 
	* ==> kube-proxy [d9b096f614a6f144dc45a35ac61715eb16950fe50d987560f4bca2146138b433] <==
	* I0912 21:44:37.047044       1 server_others.go:69] "Using iptables proxy"
	I0912 21:44:37.325227       1 node.go:141] Successfully retrieved node IP: 192.168.49.2
	I0912 21:44:37.936114       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 21:44:37.938857       1 server_others.go:152] "Using iptables Proxier"
	I0912 21:44:37.938943       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0912 21:44:37.938971       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0912 21:44:37.939029       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 21:44:37.939245       1 server.go:846] "Version info" version="v1.28.1"
	I0912 21:44:37.939484       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 21:44:37.940427       1 config.go:188] "Starting service config controller"
	I0912 21:44:37.940505       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 21:44:37.940582       1 config.go:97] "Starting endpoint slice config controller"
	I0912 21:44:37.940628       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 21:44:37.941192       1 config.go:315] "Starting node config controller"
	I0912 21:44:37.941253       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 21:44:38.040846       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 21:44:38.040981       1 shared_informer.go:318] Caches are synced for service config
	I0912 21:44:38.042291       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [d4262472c4df7bc6bf1228bb337e267af8a630bddf50cf18037fc7ed3a7d5a2d] <==
	* W0912 21:44:17.629172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:44:17.629222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 21:44:17.629736       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:44:17.629763       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 21:44:17.632127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0912 21:44:17.632187       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:44:17.632209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 21:44:17.632195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0912 21:44:17.632330       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 21:44:17.632376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 21:44:17.632346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:44:17.632412       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 21:44:17.632451       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:44:17.632489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0912 21:44:17.632581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 21:44:17.632608       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0912 21:44:18.498438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:44:18.498467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 21:44:18.557002       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:44:18.557050       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0912 21:44:18.578193       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:44:18.578230       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0912 21:44:18.600292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:44:18.600324       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0912 21:44:20.624236       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 12 21:47:59 addons-348433 kubelet[1557]: I0912 21:47:59.398527    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0bf277c9-98ad-4bdd-a3ee-b44a4274a2af-gcp-creds\") pod \"hello-world-app-5d77478584-l8dkw\" (UID: \"0bf277c9-98ad-4bdd-a3ee-b44a4274a2af\") " pod="default/hello-world-app-5d77478584-l8dkw"
	Sep 12 21:47:59 addons-348433 kubelet[1557]: I0912 21:47:59.398592    1557 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkmph\" (UniqueName: \"kubernetes.io/projected/0bf277c9-98ad-4bdd-a3ee-b44a4274a2af-kube-api-access-qkmph\") pod \"hello-world-app-5d77478584-l8dkw\" (UID: \"0bf277c9-98ad-4bdd-a3ee-b44a4274a2af\") " pod="default/hello-world-app-5d77478584-l8dkw"
	Sep 12 21:47:59 addons-348433 kubelet[1557]: W0912 21:47:59.645471    1557 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/c4161df61a631bc2b014ff180c9aee9cfdcc4df8637041cee1945bdc8271aa8d/crio-e69d66fce4ebe37140573cba41cdb5fd484d02977e99b1a609d79e0991796bda WatchSource:0}: Error finding container e69d66fce4ebe37140573cba41cdb5fd484d02977e99b1a609d79e0991796bda: Status 404 returned error can't find the container with id e69d66fce4ebe37140573cba41cdb5fd484d02977e99b1a609d79e0991796bda
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.523282    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtk5l\" (UniqueName: \"kubernetes.io/projected/320aeba6-b9c0-4415-835e-f265a0278621-kube-api-access-vtk5l\") pod \"320aeba6-b9c0-4415-835e-f265a0278621\" (UID: \"320aeba6-b9c0-4415-835e-f265a0278621\") "
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.524998    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320aeba6-b9c0-4415-835e-f265a0278621-kube-api-access-vtk5l" (OuterVolumeSpecName: "kube-api-access-vtk5l") pod "320aeba6-b9c0-4415-835e-f265a0278621" (UID: "320aeba6-b9c0-4415-835e-f265a0278621"). InnerVolumeSpecName "kube-api-access-vtk5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.624129    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vtk5l\" (UniqueName: \"kubernetes.io/projected/320aeba6-b9c0-4415-835e-f265a0278621-kube-api-access-vtk5l\") on node \"addons-348433\" DevicePath \"\""
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.817809    1557 scope.go:117] "RemoveContainer" containerID="6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3"
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.833618    1557 scope.go:117] "RemoveContainer" containerID="6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3"
	Sep 12 21:48:00 addons-348433 kubelet[1557]: E0912 21:48:00.834036    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3\": container with ID starting with 6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3 not found: ID does not exist" containerID="6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3"
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.834078    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3"} err="failed to get container status \"6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3\": rpc error: code = NotFound desc = could not find container \"6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3\": container with ID starting with 6964d45044347938a29cbf5c27f4a5fd2e5537ef6417565a9801bd1f81ec24d3 not found: ID does not exist"
	Sep 12 21:48:00 addons-348433 kubelet[1557]: I0912 21:48:00.839243    1557 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-l8dkw" podStartSLOduration=1.123896646 podCreationTimestamp="2023-09-12 21:47:59 +0000 UTC" firstStartedPulling="2023-09-12 21:47:59.648513589 +0000 UTC m=+219.245753312" lastFinishedPulling="2023-09-12 21:48:00.363823531 +0000 UTC m=+219.961063257" observedRunningTime="2023-09-12 21:48:00.838848362 +0000 UTC m=+220.436088105" watchObservedRunningTime="2023-09-12 21:48:00.839206591 +0000 UTC m=+220.436446333"
	Sep 12 21:48:02 addons-348433 kubelet[1557]: I0912 21:48:02.480485    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="320aeba6-b9c0-4415-835e-f265a0278621" path="/var/lib/kubelet/pods/320aeba6-b9c0-4415-835e-f265a0278621/volumes"
	Sep 12 21:48:02 addons-348433 kubelet[1557]: I0912 21:48:02.481103    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="869b200f-5dc9-4fb3-9d2d-df1db65ca00b" path="/var/lib/kubelet/pods/869b200f-5dc9-4fb3-9d2d-df1db65ca00b/volumes"
	Sep 12 21:48:02 addons-348433 kubelet[1557]: I0912 21:48:02.481491    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="9424fad7-e7c5-4e78-b120-3a403b1c3c72" path="/var/lib/kubelet/pods/9424fad7-e7c5-4e78-b120-3a403b1c3c72/volumes"
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.648210    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-webhook-cert\") pod \"ea1a7973-c383-4e44-846d-1b18dfaf6d6f\" (UID: \"ea1a7973-c383-4e44-846d-1b18dfaf6d6f\") "
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.648257    1557 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhcj\" (UniqueName: \"kubernetes.io/projected/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-kube-api-access-xhhcj\") pod \"ea1a7973-c383-4e44-846d-1b18dfaf6d6f\" (UID: \"ea1a7973-c383-4e44-846d-1b18dfaf6d6f\") "
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.650058    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-kube-api-access-xhhcj" (OuterVolumeSpecName: "kube-api-access-xhhcj") pod "ea1a7973-c383-4e44-846d-1b18dfaf6d6f" (UID: "ea1a7973-c383-4e44-846d-1b18dfaf6d6f"). InnerVolumeSpecName "kube-api-access-xhhcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.650120    1557 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "ea1a7973-c383-4e44-846d-1b18dfaf6d6f" (UID: "ea1a7973-c383-4e44-846d-1b18dfaf6d6f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.748541    1557 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-webhook-cert\") on node \"addons-348433\" DevicePath \"\""
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.748577    1557 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xhhcj\" (UniqueName: \"kubernetes.io/projected/ea1a7973-c383-4e44-846d-1b18dfaf6d6f-kube-api-access-xhhcj\") on node \"addons-348433\" DevicePath \"\""
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.829422    1557 scope.go:117] "RemoveContainer" containerID="17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9"
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.845258    1557 scope.go:117] "RemoveContainer" containerID="17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9"
	Sep 12 21:48:04 addons-348433 kubelet[1557]: E0912 21:48:04.845555    1557 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9\": container with ID starting with 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9 not found: ID does not exist" containerID="17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9"
	Sep 12 21:48:04 addons-348433 kubelet[1557]: I0912 21:48:04.845608    1557 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9"} err="failed to get container status \"17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9\": rpc error: code = NotFound desc = could not find container \"17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9\": container with ID starting with 17aefbcceb43d7b79a46647aab736979031fbf887b7d117803c704f8bf631ff9 not found: ID does not exist"
	Sep 12 21:48:06 addons-348433 kubelet[1557]: I0912 21:48:06.480243    1557 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="ea1a7973-c383-4e44-846d-1b18dfaf6d6f" path="/var/lib/kubelet/pods/ea1a7973-c383-4e44-846d-1b18dfaf6d6f/volumes"
	
	* 
	* ==> storage-provisioner [c4469b0779f43dfdadc44a7f870effe8c95476dc1deaba686082f6e9653c230c] <==
	* I0912 21:44:45.628549       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:44:45.736496       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:44:45.736577       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:44:45.923529       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:44:45.923808       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-348433_43664f03-631f-4e77-9e8c-db1a70a2f9e9!
	I0912 21:44:45.924956       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"04451a9e-b253-4e8c-bd77-7e28ebe22ed2", APIVersion:"v1", ResourceVersion:"786", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-348433_43664f03-631f-4e77-9e8c-db1a70a2f9e9 became leader
	I0912 21:44:46.026025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-348433_43664f03-631f-4e77-9e8c-db1a70a2f9e9!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-348433 -n addons-348433
helpers_test.go:261: (dbg) Run:  kubectl --context addons-348433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (151.25s)
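
A minimal manual reproduction of the failed check, as a sketch only: the profile name (addons-348433), hostnames, and subcommands below are taken from the log above, and both commands assume the profile is still running with the ingress and ingress-dns addons enabled.

	# Hit the ingress controller via the node's loopback, as the test does;
	# an exit status of 28 propagated through ssh matches curl's timeout code.
	out/minikube-linux-amd64 -p addons-348433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

	# Verify that ingress-dns answers on the node IP reported by the ip subcommand.
	nslookup hello-john.test $(out/minikube-linux-amd64 -p addons-348433 ip)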

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (182.38s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-704515 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-704515 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (17.120757042s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-704515 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-704515 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a8ae34c9-87c7-4960-ba52-b4b74f527db5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a8ae34c9-87c7-4960-ba52-b4b74f527db5] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 8.007524411s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0912 21:55:29.800956   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.075981607s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-704515 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.49.2
E0912 21:55:57.484008   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:56:06.482308   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.487576   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.497814   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.518071   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.558339   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.638634   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:06.799018   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:07.119627   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
addons_test.go:273: (dbg) Non-zero exit: nslookup hello-john.test 192.168.49.2: exit status 1 (15.006864272s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
addons_test.go:275: failed to nslookup hello-john.test host. args "nslookup hello-john.test 192.168.49.2" : exit status 1
addons_test.go:279: unexpected output from nslookup. stdout: ;; connection timed out; no servers could be reached

                                                
                                                

                                                
                                                

                                                
                                                
stderr: 
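The timeout above indicates that nothing answered DNS queries on 192.168.49.2. A hedged way to narrow this down manually (the grep pattern below is illustrative rather than the addon's exact pod label, and both commands assume the ingress-addon-legacy-704515 profile is still up):

	# Does anything respond to DNS on the node IP at all?
	dig +time=3 +tries=1 hello-john.test @192.168.49.2

	# Is the ingress-dns pod actually running in the cluster?
	kubectl --context ingress-addon-legacy-704515 get pods -A | grep -i dns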
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons disable ingress-dns --alsologtostderr -v=1
E0912 21:56:07.760397   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:09.041156   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons disable ingress-dns --alsologtostderr -v=1: (2.250476227s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons disable ingress --alsologtostderr -v=1
E0912 21:56:11.603002   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:16.724105   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons disable ingress --alsologtostderr -v=1: (7.395657668s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-704515
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-704515:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24",
	        "Created": "2023-09-12T21:52:08.501354188Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 61068,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T21:52:08.765818095Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0508862d812894c98deaaf3533e6d3386b479df1d249d4410a6247f1f44ad45d",
	        "ResolvConfPath": "/var/lib/docker/containers/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24/hostname",
	        "HostsPath": "/var/lib/docker/containers/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24/hosts",
	        "LogPath": "/var/lib/docker/containers/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24-json.log",
	        "Name": "/ingress-addon-legacy-704515",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-704515:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ingress-addon-legacy-704515",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/611879a1d948b539b3211e387deb61d3dfa34b765346081bb26cdb7b79721202-init/diff:/var/lib/docker/overlay2/27d59bddd44498ba277aabbca5bbef44e363739d94cbe3a544670a142640c048/diff",
	                "MergedDir": "/var/lib/docker/overlay2/611879a1d948b539b3211e387deb61d3dfa34b765346081bb26cdb7b79721202/merged",
	                "UpperDir": "/var/lib/docker/overlay2/611879a1d948b539b3211e387deb61d3dfa34b765346081bb26cdb7b79721202/diff",
	                "WorkDir": "/var/lib/docker/overlay2/611879a1d948b539b3211e387deb61d3dfa34b765346081bb26cdb7b79721202/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-704515",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-704515/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-704515",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-704515",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-704515",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1c7d650a3bc298ad9650fbc32bb2359742c2ae4098169bd07f33d020b0772584",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1c7d650a3bc2",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-704515": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "99af414f51e5",
	                        "ingress-addon-legacy-704515"
	                    ],
	                    "NetworkID": "533dd3489399f3d352bda98474dfb33e64c88e106fbb4e56ba9965681ba0581f",
	                    "EndpointID": "08218f68866b9df5ee6be17bda3877b0fa7a94d0e5f0212aa4fc6f2840531ea8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
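
The inspect dump above shows the kicbase container publishing ports 22, 2376, 5000, 8443 and 32443 on loopback-only host ports. To pull just those mappings out of a dump like this, a quick sketch with the standard Docker CLI (the container name is the one from the output above) is:

    # host port that 22/tcp is published on (this is what minikube dials for SSH)
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ingress-addon-legacy-704515
    # or list every published port at once
    docker port ingress-addon-legacy-704515
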
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-704515 -n ingress-addon-legacy-704515
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-704515 logs -n 25: (1.008752288s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                 Args                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /etc/ssl/certs/22698.pem              |                             |         |         |                     |                     |
	| service        | functional-728577 service             | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | hello-node --url                      |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /usr/share/ca-certificates/22698.pem  |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /etc/ssl/certs/51391683.0             |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /etc/ssl/certs/226982.pem             |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /usr/share/ca-certificates/226982.pem |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh sudo cat        | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0             |                             |         |         |                     |                     |
	| image          | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | image ls --format short               |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| image          | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | image ls --format yaml                |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| ssh            | functional-728577 ssh pgrep           | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC |                     |
	|                | buildkitd                             |                             |         |         |                     |                     |
	| image          | functional-728577 image build -t      | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | localhost/my-image:functional-728577  |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr      |                             |         |         |                     |                     |
	| image          | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | image ls --format json                |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| image          | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | image ls --format table               |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	| update-context | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| update-context | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| update-context | functional-728577                     | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	|                | update-context                        |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                |                             |         |         |                     |                     |
	| image          | functional-728577 image ls            | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	| delete         | -p functional-728577                  | functional-728577           | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:51 UTC |
	| start          | -p ingress-addon-legacy-704515        | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:51 UTC | 12 Sep 23 21:53 UTC |
	|                | --kubernetes-version=v1.18.20         |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true             |                             |         |         |                     |                     |
	|                | --alsologtostderr                     |                             |         |         |                     |                     |
	|                | -v=5 --driver=docker                  |                             |         |         |                     |                     |
	|                | --container-runtime=crio              |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-704515           | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:53 UTC | 12 Sep 23 21:53 UTC |
	|                | addons enable ingress                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-704515           | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:53 UTC | 12 Sep 23 21:53 UTC |
	|                | addons enable ingress-dns             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-704515           | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:53 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/         |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'          |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-704515 ip        | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:55 UTC | 12 Sep 23 21:55 UTC |
	| addons         | ingress-addon-legacy-704515           | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:56 UTC | 12 Sep 23 21:56 UTC |
	|                | addons disable ingress-dns            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-704515           | ingress-addon-legacy-704515 | jenkins | v1.31.2 | 12 Sep 23 21:56 UTC | 12 Sep 23 21:56 UTC |
	|                | addons disable ingress                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                |                             |         |         |                     |                     |
	|----------------|---------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
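
The one row in the table without an end time is the ingress probe itself (the ssh curl against 127.0.0.1 with the nginx.example.com Host header), which is the step that timed out. It can be repeated by hand against the same profile, with -v added to see where the request stalls; this is only a sketch and assumes the cluster and ingress addon are still up:

    # repeat the failed ingress probe from inside the node
    out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh "curl -sv http://127.0.0.1/ -H 'Host: nginx.example.com'"
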
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 21:51:57
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:51:57.071791   60455 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:51:57.071927   60455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:57.071937   60455 out.go:309] Setting ErrFile to fd 2...
	I0912 21:51:57.071942   60455 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:57.072119   60455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 21:51:57.072740   60455 out.go:303] Setting JSON to false
	I0912 21:51:57.073638   60455 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5665,"bootTime":1694549852,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:51:57.073701   60455 start.go:138] virtualization: kvm guest
	I0912 21:51:57.075659   60455 out.go:177] * [ingress-addon-legacy-704515] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:51:57.077368   60455 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 21:51:57.078746   60455 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:51:57.077381   60455 notify.go:220] Checking for updates...
	I0912 21:51:57.080241   60455 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:51:57.081889   60455 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:51:57.083211   60455 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:51:57.084466   60455 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:51:57.085821   60455 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 21:51:57.107156   60455 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 21:51:57.107248   60455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:51:57.159262   60455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-12 21:51:57.15109785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:51:57.159366   60455 docker.go:294] overlay module found
	I0912 21:51:57.161046   60455 out.go:177] * Using the docker driver based on user configuration
	I0912 21:51:57.162249   60455 start.go:298] selected driver: docker
	I0912 21:51:57.162259   60455 start.go:902] validating driver "docker" against <nil>
	I0912 21:51:57.162268   60455 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:51:57.163007   60455 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:51:57.215711   60455 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:37 SystemTime:2023-09-12 21:51:57.206891071 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:51:57.215901   60455 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 21:51:57.216174   60455 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 21:51:57.217663   60455 out.go:177] * Using Docker driver with root privileges
	I0912 21:51:57.218863   60455 cni.go:84] Creating CNI manager for ""
	I0912 21:51:57.218893   60455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:51:57.218909   60455 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 21:51:57.218922   60455 start_flags.go:321] config:
	{Name:ingress-addon-legacy-704515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-704515 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:51:57.220334   60455 out.go:177] * Starting control plane node ingress-addon-legacy-704515 in cluster ingress-addon-legacy-704515
	I0912 21:51:57.221515   60455 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 21:51:57.222800   60455 out.go:177] * Pulling base image ...
	I0912 21:51:57.223911   60455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0912 21:51:57.223934   60455 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 21:51:57.239341   60455 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 21:51:57.239364   60455 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 21:51:57.259414   60455 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0912 21:51:57.259443   60455 cache.go:57] Caching tarball of preloaded images
	I0912 21:51:57.259593   60455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0912 21:51:57.261177   60455 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0912 21:51:57.262376   60455 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:51:57.295679   60455 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0912 21:52:00.255841   60455 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:52:00.255953   60455 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:52:01.262849   60455 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
	I0912 21:52:01.263226   60455 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/config.json ...
	I0912 21:52:01.263267   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/config.json: {Name:mk3aaaeb1e0d580c1915bca863d616a951881723 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:01.263446   60455 cache.go:195] Successfully downloaded all kic artifacts
	I0912 21:52:01.263474   60455 start.go:365] acquiring machines lock for ingress-addon-legacy-704515: {Name:mkeb22f053185230189d84f4a2c62784807f873b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 21:52:01.263531   60455 start.go:369] acquired machines lock for "ingress-addon-legacy-704515" in 44.277µs
	I0912 21:52:01.263563   60455 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-704515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-704515 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:52:01.263654   60455 start.go:125] createHost starting for "" (driver="docker")
	I0912 21:52:01.265661   60455 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0912 21:52:01.265932   60455 start.go:159] libmachine.API.Create for "ingress-addon-legacy-704515" (driver="docker")
	I0912 21:52:01.265967   60455 client.go:168] LocalClient.Create starting
	I0912 21:52:01.266057   60455 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 21:52:01.266104   60455 main.go:141] libmachine: Decoding PEM data...
	I0912 21:52:01.266128   60455 main.go:141] libmachine: Parsing certificate...
	I0912 21:52:01.266190   60455 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 21:52:01.266219   60455 main.go:141] libmachine: Decoding PEM data...
	I0912 21:52:01.266238   60455 main.go:141] libmachine: Parsing certificate...
	I0912 21:52:01.266547   60455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-704515 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 21:52:01.282063   60455 cli_runner.go:211] docker network inspect ingress-addon-legacy-704515 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 21:52:01.282143   60455 network_create.go:281] running [docker network inspect ingress-addon-legacy-704515] to gather additional debugging logs...
	I0912 21:52:01.282166   60455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-704515
	W0912 21:52:01.297007   60455 cli_runner.go:211] docker network inspect ingress-addon-legacy-704515 returned with exit code 1
	I0912 21:52:01.297036   60455 network_create.go:284] error running [docker network inspect ingress-addon-legacy-704515]: docker network inspect ingress-addon-legacy-704515: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ingress-addon-legacy-704515 not found
	I0912 21:52:01.297050   60455 network_create.go:286] output of [docker network inspect ingress-addon-legacy-704515]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ingress-addon-legacy-704515 not found
	
	** /stderr **
	I0912 21:52:01.297102   60455 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:52:01.312652   60455 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000a23130}
	I0912 21:52:01.312694   60455 network_create.go:123] attempt to create docker network ingress-addon-legacy-704515 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0912 21:52:01.312746   60455 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-704515 ingress-addon-legacy-704515
	I0912 21:52:01.361735   60455 network_create.go:107] docker network ingress-addon-legacy-704515 192.168.49.0/24 created
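
If the chosen subnet ever needs double-checking, the freshly created network can be queried with the same --format templating the log itself uses (network name taken from the line above):

    # show the subnet and gateway minikube picked for the cluster network
    docker network inspect ingress-addon-legacy-704515 --format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
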
	I0912 21:52:01.361765   60455 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-704515" container
	I0912 21:52:01.361819   60455 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 21:52:01.376290   60455 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-704515 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-704515 --label created_by.minikube.sigs.k8s.io=true
	I0912 21:52:01.392080   60455 oci.go:103] Successfully created a docker volume ingress-addon-legacy-704515
	I0912 21:52:01.392147   60455 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-704515-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-704515 --entrypoint /usr/bin/test -v ingress-addon-legacy-704515:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 21:52:03.147249   60455 cli_runner.go:217] Completed: docker run --rm --name ingress-addon-legacy-704515-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-704515 --entrypoint /usr/bin/test -v ingress-addon-legacy-704515:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib: (1.755038051s)
	I0912 21:52:03.147281   60455 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-704515
	I0912 21:52:03.147298   60455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0912 21:52:03.147317   60455 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 21:52:03.147369   60455 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-704515:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 21:52:08.438792   60455 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-704515:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.291382242s)
	I0912 21:52:08.438821   60455 kic.go:199] duration metric: took 5.291502 seconds to extract preloaded images to volume
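
A rough way to confirm the preload really landed in the volume is the same sidecar pattern the extraction used, overriding the image entrypoint; this is only a sketch and assumes the extracted cri-o image store sits under /var/lib/containers inside the volume:

    # list the extracted cri-o storage in the ingress-addon-legacy-704515 volume
    docker run --rm --entrypoint /bin/ls \
      -v ingress-addon-legacy-704515:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 \
      -la /var/lib/containers
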
	W0912 21:52:08.438934   60455 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 21:52:08.439020   60455 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 21:52:08.487105   60455 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-704515 --name ingress-addon-legacy-704515 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-704515 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-704515 --network ingress-addon-legacy-704515 --ip 192.168.49.2 --volume ingress-addon-legacy-704515:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 21:52:08.772896   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Running}}
	I0912 21:52:08.789379   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:08.806068   60455 cli_runner.go:164] Run: docker exec ingress-addon-legacy-704515 stat /var/lib/dpkg/alternatives/iptables
	I0912 21:52:08.851681   60455 oci.go:144] the created container "ingress-addon-legacy-704515" has a running status.
	I0912 21:52:08.851714   60455 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa...
	I0912 21:52:09.015156   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0912 21:52:09.015218   60455 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 21:52:09.036054   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:09.056897   60455 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 21:52:09.056917   60455 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-704515 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 21:52:09.118545   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:09.136402   60455 machine.go:88] provisioning docker machine ...
	I0912 21:52:09.136434   60455 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-704515"
	I0912 21:52:09.136481   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:09.153444   60455 main.go:141] libmachine: Using SSH client type: native
	I0912 21:52:09.153827   60455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0912 21:52:09.153845   60455 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-704515 && echo "ingress-addon-legacy-704515" | sudo tee /etc/hostname
	I0912 21:52:09.154443   60455 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58304->127.0.0.1:32787: read: connection reset by peer
	I0912 21:52:12.297984   60455 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-704515
	
	I0912 21:52:12.298049   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:12.313901   60455 main.go:141] libmachine: Using SSH client type: native
	I0912 21:52:12.314230   60455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0912 21:52:12.314252   60455 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-704515' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-704515/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-704515' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 21:52:12.448479   60455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 21:52:12.448516   60455 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 21:52:12.448545   60455 ubuntu.go:177] setting up certificates
	I0912 21:52:12.448563   60455 provision.go:83] configureAuth start
	I0912 21:52:12.448646   60455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-704515
	I0912 21:52:12.464850   60455 provision.go:138] copyHostCerts
	I0912 21:52:12.464888   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 21:52:12.464924   60455 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 21:52:12.464934   60455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 21:52:12.464994   60455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 21:52:12.465059   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 21:52:12.465075   60455 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 21:52:12.465081   60455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 21:52:12.465106   60455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 21:52:12.465146   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 21:52:12.465163   60455 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 21:52:12.465171   60455 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 21:52:12.465190   60455 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 21:52:12.465237   60455 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-704515 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-704515]
	I0912 21:52:12.544224   60455 provision.go:172] copyRemoteCerts
	I0912 21:52:12.544279   60455 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 21:52:12.544312   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:12.560576   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:12.656524   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 21:52:12.656582   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 21:52:12.677119   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 21:52:12.677176   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0912 21:52:12.697119   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 21:52:12.697185   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0912 21:52:12.716842   60455 provision.go:86] duration metric: configureAuth took 268.263504ms
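
The three certificate files copied above should now sit under /etc/docker on the node; while the profile is still running they can be sanity-checked from the host (a sketch; sudo is used in case the directory is not world-readable):

    # confirm the CA and server cert/key landed where the provisioner put them
    out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh "sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
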
	I0912 21:52:12.716868   60455 ubuntu.go:193] setting minikube options for container-runtime
	I0912 21:52:12.717042   60455 config.go:182] Loaded profile config "ingress-addon-legacy-704515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0912 21:52:12.717136   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:12.733285   60455 main.go:141] libmachine: Using SSH client type: native
	I0912 21:52:12.733603   60455 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0912 21:52:12.733623   60455 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 21:52:12.973687   60455 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 21:52:12.973708   60455 machine.go:91] provisioned docker machine in 3.837286677s
	I0912 21:52:12.973718   60455 client.go:171] LocalClient.Create took 11.7077443s
	I0912 21:52:12.973731   60455 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-704515" took 11.707798884s
	I0912 21:52:12.973740   60455 start.go:300] post-start starting for "ingress-addon-legacy-704515" (driver="docker")
	I0912 21:52:12.973751   60455 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 21:52:12.973810   60455 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 21:52:12.973859   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:12.990315   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:13.084923   60455 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 21:52:13.087759   60455 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 21:52:13.087788   60455 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 21:52:13.087796   60455 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 21:52:13.087805   60455 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 21:52:13.087816   60455 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 21:52:13.087868   60455 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 21:52:13.087933   60455 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 21:52:13.087942   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /etc/ssl/certs/226982.pem
	I0912 21:52:13.088023   60455 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 21:52:13.095269   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 21:52:13.115489   60455 start.go:303] post-start completed in 141.736231ms
	I0912 21:52:13.115794   60455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-704515
	I0912 21:52:13.131331   60455 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/config.json ...
	I0912 21:52:13.131546   60455 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 21:52:13.131582   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:13.147084   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:13.237109   60455 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 21:52:13.240957   60455 start.go:128] duration metric: createHost completed in 11.977288997s
	I0912 21:52:13.240978   60455 start.go:83] releasing machines lock for "ingress-addon-legacy-704515", held for 11.977433342s
	I0912 21:52:13.241026   60455 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-704515
	I0912 21:52:13.257040   60455 ssh_runner.go:195] Run: cat /version.json
	I0912 21:52:13.257091   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:13.257112   60455 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 21:52:13.257183   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:13.273583   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:13.274048   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:13.451476   60455 ssh_runner.go:195] Run: systemctl --version
	I0912 21:52:13.455542   60455 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 21:52:13.591296   60455 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 21:52:13.595275   60455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:52:13.612034   60455 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 21:52:13.612111   60455 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 21:52:13.637159   60455 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 21:52:13.637185   60455 start.go:469] detecting cgroup driver to use...
	I0912 21:52:13.637223   60455 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 21:52:13.637296   60455 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 21:52:13.650459   60455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 21:52:13.660175   60455 docker.go:196] disabling cri-docker service (if available) ...
	I0912 21:52:13.660221   60455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 21:52:13.671851   60455 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 21:52:13.684478   60455 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 21:52:13.763873   60455 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 21:52:13.840262   60455 docker.go:212] disabling docker service ...
	I0912 21:52:13.840315   60455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 21:52:13.856865   60455 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 21:52:13.866713   60455 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 21:52:13.939678   60455 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 21:52:14.015951   60455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 21:52:14.026254   60455 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 21:52:14.039875   60455 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 21:52:14.039922   60455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:52:14.048046   60455 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 21:52:14.048107   60455 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:52:14.056159   60455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:52:14.064176   60455 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 21:52:14.072259   60455 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 21:52:14.079661   60455 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 21:52:14.086653   60455 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 21:52:14.093699   60455 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 21:52:14.171080   60455 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 21:52:14.280160   60455 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 21:52:14.280231   60455 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 21:52:14.283359   60455 start.go:537] Will wait 60s for crictl version
	I0912 21:52:14.283408   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:14.286342   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 21:52:14.317390   60455 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 21:52:14.317468   60455 ssh_runner.go:195] Run: crio --version
	I0912 21:52:14.348667   60455 ssh_runner.go:195] Run: crio --version
	I0912 21:52:14.382972   60455 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.6 ...
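The sed and tee commands above point CRI-O at the registry.k8s.io/pause:3.2 pause image, switch it to the cgroupfs cgroup manager, and point crictl at the CRI-O socket. A minimal manual check of those settings (a sketch, not something the test itself runs) would be:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	cat /etc/crictl.yaml          # expect: runtime-endpoint: unix:///var/run/crio/crio.sock
	sudo crictl version           # expect RuntimeName cri-o, RuntimeVersion 1.24.6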
	I0912 21:52:14.384354   60455 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-704515 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 21:52:14.399767   60455 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0912 21:52:14.403030   60455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:52:14.412477   60455 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0912 21:52:14.412526   60455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:52:14.453722   60455 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0912 21:52:14.453782   60455 ssh_runner.go:195] Run: which lz4
	I0912 21:52:14.456921   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0912 21:52:14.457014   60455 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0912 21:52:14.459980   60455 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0912 21:52:14.460012   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0912 21:52:15.380508   60455 crio.go:444] Took 0.923513 seconds to copy over tarball
	I0912 21:52:15.380571   60455 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0912 21:52:17.578499   60455 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.197894241s)
	I0912 21:52:17.578535   60455 crio.go:451] Took 2.197999 seconds to extract the tarball
	I0912 21:52:17.578550   60455 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0912 21:52:17.645242   60455 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 21:52:17.676135   60455 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0912 21:52:17.676157   60455 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0912 21:52:17.676205   60455 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:52:17.676239   60455 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0912 21:52:17.676268   60455 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 21:52:17.676290   60455 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0912 21:52:17.676272   60455 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 21:52:17.676291   60455 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0912 21:52:17.676277   60455 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0912 21:52:17.676209   60455 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 21:52:17.677198   60455 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0912 21:52:17.677215   60455 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:52:17.677280   60455 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 21:52:17.677294   60455 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 21:52:17.677310   60455 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 21:52:17.677315   60455 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0912 21:52:17.677370   60455 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0912 21:52:17.677490   60455 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 21:52:17.832156   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0912 21:52:17.833393   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0912 21:52:17.842944   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0912 21:52:17.845174   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0912 21:52:17.851112   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 21:52:17.851612   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0912 21:52:17.855557   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0912 21:52:17.927511   60455 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0912 21:52:17.927583   60455 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0912 21:52:17.927653   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.927438   60455 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0912 21:52:17.927779   60455 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0912 21:52:17.927820   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.937670   60455 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0912 21:52:17.937711   60455 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0912 21:52:17.937776   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.950961   60455 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0912 21:52:17.950999   60455 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0912 21:52:17.951020   60455 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0912 21:52:17.951034   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.951052   60455 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 21:52:17.951090   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.951105   60455 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0912 21:52:17.951130   60455 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0912 21:52:17.951172   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.954112   60455 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0912 21:52:17.954141   60455 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0912 21:52:17.954181   60455 ssh_runner.go:195] Run: which crictl
	I0912 21:52:17.954182   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0912 21:52:17.954251   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0912 21:52:17.954297   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0912 21:52:17.954737   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0912 21:52:17.954781   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0912 21:52:17.956687   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0912 21:52:18.054633   60455 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:52:18.140579   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0912 21:52:18.140744   60455 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0912 21:52:18.140768   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0912 21:52:18.140799   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0912 21:52:18.140819   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0912 21:52:18.140823   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0912 21:52:18.140903   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0912 21:52:18.262074   60455 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 21:52:18.262129   60455 cache_images.go:92] LoadImages completed in 585.961713ms
	W0912 21:52:18.262189   60455 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7: no such file or directory
	I0912 21:52:18.262251   60455 ssh_runner.go:195] Run: crio config
	I0912 21:52:18.301189   60455 cni.go:84] Creating CNI manager for ""
	I0912 21:52:18.301207   60455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:52:18.301225   60455 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 21:52:18.301242   60455 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-704515 NodeName:ingress-addon-legacy-704515 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0912 21:52:18.301373   60455 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-704515"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
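The kubeadm, kubelet and kube-proxy configuration generated above is later written to /var/tmp/minikube/kubeadm.yaml.new and fed to kubeadm init. A sketch for confirming the applied configuration once the control plane is up, assuming the ConfigMap names reported by kubeadm further down in this log (kubeadm-config and kubelet-config-1.18):

	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap kubeadm-config -o yaml
	sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap kubelet-config-1.18 -o yaml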
	I0912 21:52:18.301493   60455 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=ingress-addon-legacy-704515 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-704515 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 21:52:18.301540   60455 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0912 21:52:18.309318   60455 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 21:52:18.309369   60455 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 21:52:18.316678   60455 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (486 bytes)
	I0912 21:52:18.331709   60455 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0912 21:52:18.346581   60455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0912 21:52:18.361239   60455 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0912 21:52:18.364107   60455 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 21:52:18.373249   60455 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515 for IP: 192.168.49.2
	I0912 21:52:18.373282   60455 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.373404   60455 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 21:52:18.373447   60455 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 21:52:18.373489   60455 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key
	I0912 21:52:18.373508   60455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt with IP's: []
	I0912 21:52:18.527526   60455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt ...
	I0912 21:52:18.527554   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: {Name:mkc8ed93911991b2ee177a4ed0278821cda65a2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.527712   60455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key ...
	I0912 21:52:18.527722   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key: {Name:mk761b0909dbace55f5baf61e970bd44e8460b25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.527795   60455 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key.dd3b5fb2
	I0912 21:52:18.527813   60455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 21:52:18.717343   60455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt.dd3b5fb2 ...
	I0912 21:52:18.717371   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt.dd3b5fb2: {Name:mkaaa1ad5f9fa2acb4a05026d24aecc14eb286e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.717515   60455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key.dd3b5fb2 ...
	I0912 21:52:18.717526   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key.dd3b5fb2: {Name:mk126c6e2781fdc9b02d677ed429aefe0d8b0822 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.717591   60455 certs.go:337] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt.dd3b5fb2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt
	I0912 21:52:18.717671   60455 certs.go:341] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key.dd3b5fb2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key
	I0912 21:52:18.717723   60455 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.key
	I0912 21:52:18.717742   60455 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.crt with IP's: []
	I0912 21:52:18.821447   60455 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.crt ...
	I0912 21:52:18.821478   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.crt: {Name:mkd39ee8079d45cf8cd4ef4f1c5943e0e4d532ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.821644   60455 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.key ...
	I0912 21:52:18.821654   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.key: {Name:mk25656e731acd13a6da40d87508016719051a4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:18.821727   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 21:52:18.821748   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 21:52:18.821765   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 21:52:18.821781   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 21:52:18.821791   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 21:52:18.821800   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 21:52:18.821812   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 21:52:18.821821   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 21:52:18.821870   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 21:52:18.821904   60455 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 21:52:18.821913   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 21:52:18.821954   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 21:52:18.821982   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 21:52:18.822007   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 21:52:18.822048   60455 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 21:52:18.822076   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:52:18.822089   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem -> /usr/share/ca-certificates/22698.pem
	I0912 21:52:18.822098   60455 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /usr/share/ca-certificates/226982.pem
	I0912 21:52:18.822656   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 21:52:18.843812   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0912 21:52:18.864119   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 21:52:18.884284   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 21:52:18.904282   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 21:52:18.924301   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 21:52:18.944352   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 21:52:18.964496   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 21:52:18.984923   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 21:52:19.004787   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 21:52:19.024831   60455 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 21:52:19.044695   60455 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 21:52:19.059284   60455 ssh_runner.go:195] Run: openssl version
	I0912 21:52:19.064091   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 21:52:19.071878   60455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 21:52:19.074875   60455 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 21:52:19.074914   60455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 21:52:19.080698   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
	I0912 21:52:19.088635   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 21:52:19.096629   60455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 21:52:19.099426   60455 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 21:52:19.099463   60455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 21:52:19.105284   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 21:52:19.112949   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 21:52:19.120615   60455 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:52:19.123475   60455 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:52:19.123512   60455 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 21:52:19.129477   60455 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
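The openssl and ln commands above build OpenSSL-style hashed symlinks in /etc/ssl/certs: the link name is the subject hash printed by openssl x509 -hash plus a .0 suffix. A sketch of checking that relationship for the minikube CA (b5213941.0 in this run):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${hash}.0"    # expected to resolve to minikubeCA.pem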
	I0912 21:52:19.137249   60455 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 21:52:19.139978   60455 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 21:52:19.140022   60455 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-704515 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-704515 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:52:19.140108   60455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 21:52:19.140139   60455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 21:52:19.171047   60455 cri.go:89] found id: ""
	I0912 21:52:19.171108   60455 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 21:52:19.178730   60455 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 21:52:19.186129   60455 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0912 21:52:19.186169   60455 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 21:52:19.193269   60455 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 21:52:19.193306   60455 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 21:52:19.234616   60455 kubeadm.go:322] W0912 21:52:19.234018    1375 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0912 21:52:19.271489   60455 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0912 21:52:19.335283   60455 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 21:52:21.292826   60455 kubeadm.go:322] W0912 21:52:21.292436    1375 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0912 21:52:21.293867   60455 kubeadm.go:322] W0912 21:52:21.293574    1375 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0912 21:52:29.245663   60455 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0912 21:52:29.245740   60455 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 21:52:29.245862   60455 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0912 21:52:29.245936   60455 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 21:52:29.245967   60455 kubeadm.go:322] OS: Linux
	I0912 21:52:29.246013   60455 kubeadm.go:322] CGROUPS_CPU: enabled
	I0912 21:52:29.246053   60455 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0912 21:52:29.246101   60455 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0912 21:52:29.246143   60455 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0912 21:52:29.246216   60455 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0912 21:52:29.246294   60455 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0912 21:52:29.246398   60455 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 21:52:29.246537   60455 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 21:52:29.246688   60455 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 21:52:29.246782   60455 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 21:52:29.246853   60455 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 21:52:29.246889   60455 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 21:52:29.246944   60455 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 21:52:29.249244   60455 out.go:204]   - Generating certificates and keys ...
	I0912 21:52:29.249340   60455 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 21:52:29.249407   60455 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 21:52:29.249476   60455 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 21:52:29.249532   60455 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 21:52:29.249595   60455 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 21:52:29.249641   60455 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 21:52:29.249696   60455 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 21:52:29.249800   60455 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-704515 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:52:29.249844   60455 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 21:52:29.250013   60455 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-704515 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0912 21:52:29.250078   60455 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 21:52:29.250130   60455 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 21:52:29.250179   60455 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 21:52:29.250226   60455 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 21:52:29.250285   60455 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 21:52:29.250349   60455 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 21:52:29.250421   60455 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 21:52:29.250495   60455 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 21:52:29.250567   60455 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 21:52:29.251735   60455 out.go:204]   - Booting up control plane ...
	I0912 21:52:29.251805   60455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 21:52:29.251867   60455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 21:52:29.251933   60455 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 21:52:29.252025   60455 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 21:52:29.252197   60455 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 21:52:29.252287   60455 kubeadm.go:322] [apiclient] All control plane components are healthy after 6.502556 seconds
	I0912 21:52:29.252405   60455 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 21:52:29.252559   60455 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 21:52:29.252654   60455 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 21:52:29.252763   60455 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-704515 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0912 21:52:29.252814   60455 kubeadm.go:322] [bootstrap-token] Using token: jrauyj.z9rqzv6whm3z2xsn
	I0912 21:52:29.254034   60455 out.go:204]   - Configuring RBAC rules ...
	I0912 21:52:29.254117   60455 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 21:52:29.254199   60455 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 21:52:29.254363   60455 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 21:52:29.254475   60455 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 21:52:29.254608   60455 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 21:52:29.254695   60455 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 21:52:29.254800   60455 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 21:52:29.254879   60455 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 21:52:29.254936   60455 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 21:52:29.254947   60455 kubeadm.go:322] 
	I0912 21:52:29.255017   60455 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 21:52:29.255026   60455 kubeadm.go:322] 
	I0912 21:52:29.255120   60455 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 21:52:29.255140   60455 kubeadm.go:322] 
	I0912 21:52:29.255173   60455 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 21:52:29.255253   60455 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 21:52:29.255367   60455 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 21:52:29.255379   60455 kubeadm.go:322] 
	I0912 21:52:29.255448   60455 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 21:52:29.255559   60455 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 21:52:29.255668   60455 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 21:52:29.255679   60455 kubeadm.go:322] 
	I0912 21:52:29.255751   60455 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 21:52:29.255842   60455 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 21:52:29.255853   60455 kubeadm.go:322] 
	I0912 21:52:29.255972   60455 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token jrauyj.z9rqzv6whm3z2xsn \
	I0912 21:52:29.256107   60455 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 \
	I0912 21:52:29.256152   60455 kubeadm.go:322]     --control-plane 
	I0912 21:52:29.256165   60455 kubeadm.go:322] 
	I0912 21:52:29.256266   60455 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 21:52:29.256273   60455 kubeadm.go:322] 
	I0912 21:52:29.256337   60455 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token jrauyj.z9rqzv6whm3z2xsn \
	I0912 21:52:29.256433   60455 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 
	I0912 21:52:29.256452   60455 cni.go:84] Creating CNI manager for ""
	I0912 21:52:29.256462   60455 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:52:29.257602   60455 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 21:52:29.258845   60455 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 21:52:29.262362   60455 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.18.20/kubectl ...
	I0912 21:52:29.262376   60455 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 21:52:29.277851   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 21:52:29.646917   60455 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 21:52:29.646986   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:29.647023   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=ingress-addon-legacy-704515 minikube.k8s.io/updated_at=2023_09_12T21_52_29_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:29.654721   60455 ops.go:34] apiserver oom_adj: -16
	I0912 21:52:29.749859   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:29.834024   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:30.398433   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:30.898867   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:31.397873   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:31.898274   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:32.398771   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:32.898097   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:33.398763   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:33.898658   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:34.398116   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:34.898801   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:35.397924   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:35.898415   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:36.398184   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:36.898780   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:37.398315   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:37.898213   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:38.398536   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:38.898796   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:39.398046   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:39.897987   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:40.398535   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:40.898227   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:41.398826   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:41.898210   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:42.398864   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:42.897921   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:43.398120   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:43.898907   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:44.397879   60455 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 21:52:44.463062   60455 kubeadm.go:1081] duration metric: took 14.816119815s to wait for elevateKubeSystemPrivileges.
	I0912 21:52:44.463095   60455 kubeadm.go:406] StartCluster complete in 25.323076348s
	I0912 21:52:44.463114   60455 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:44.463183   60455 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:52:44.463853   60455 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:52:44.464057   60455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 21:52:44.464195   60455 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 21:52:44.464278   60455 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-704515"
	I0912 21:52:44.464295   60455 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-704515"
	I0912 21:52:44.464293   60455 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-704515"
	I0912 21:52:44.464320   60455 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-704515"
	I0912 21:52:44.464319   60455 config.go:182] Loaded profile config "ingress-addon-legacy-704515": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0912 21:52:44.464350   60455 host.go:66] Checking if "ingress-addon-legacy-704515" exists ...
	I0912 21:52:44.464653   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:44.464773   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:44.464779   60455 kapi.go:59] client config for ingress-addon-legacy-704515: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 21:52:44.465504   60455 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 21:52:44.480889   60455 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-704515" context rescaled to 1 replicas
	I0912 21:52:44.480930   60455 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 21:52:44.482901   60455 out.go:177] * Verifying Kubernetes components...
	I0912 21:52:44.485120   60455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:52:44.484074   60455 kapi.go:59] client config for ingress-addon-legacy-704515: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 21:52:44.487943   60455 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-704515"
	I0912 21:52:44.487990   60455 host.go:66] Checking if "ingress-addon-legacy-704515" exists ...
	I0912 21:52:44.488507   60455 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-704515 --format={{.State.Status}}
	I0912 21:52:44.492507   60455 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 21:52:44.493938   60455 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:52:44.493959   60455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 21:52:44.494017   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:44.509949   60455 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 21:52:44.509974   60455 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 21:52:44.510028   60455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-704515
	I0912 21:52:44.514678   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:44.530433   60455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/ingress-addon-legacy-704515/id_rsa Username:docker}
	I0912 21:52:44.625466   60455 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 21:52:44.626028   60455 kapi.go:59] client config for ingress-addon-legacy-704515: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 21:52:44.626316   60455 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-704515" to be "Ready" ...
	I0912 21:52:44.729034   60455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 21:52:44.730453   60455 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 21:52:45.127399   60455 start.go:917] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0912 21:52:45.253894   60455 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0912 21:52:45.255130   60455 addons.go:502] enable addons completed in 790.931973ms: enabled=[default-storageclass storage-provisioner]
	I0912 21:52:46.635231   60455 node_ready.go:58] node "ingress-addon-legacy-704515" has status "Ready":"False"
	I0912 21:52:49.134396   60455 node_ready.go:58] node "ingress-addon-legacy-704515" has status "Ready":"False"
	I0912 21:52:49.634103   60455 node_ready.go:49] node "ingress-addon-legacy-704515" has status "Ready":"True"
	I0912 21:52:49.634129   60455 node_ready.go:38] duration metric: took 5.007783418s waiting for node "ingress-addon-legacy-704515" to be "Ready" ...
	I0912 21:52:49.634138   60455 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:52:49.639911   60455 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace to be "Ready" ...
	I0912 21:52:51.647711   60455 pod_ready.go:102] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace doesn't have "Ready" status: {Phase:Pending Conditions:[{Type:PodScheduled Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-09-12 21:52:44 +0000 UTC Reason:Unschedulable Message:0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.}] Message: Reason: NominatedNodeName: HostIP: HostIPs:[] PodIP: PodIPs:[] StartTime:<nil> InitContainerStatuses:[] ContainerStatuses:[] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0912 21:52:54.149320   60455 pod_ready.go:102] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace has status "Ready":"False"
	I0912 21:52:56.649129   60455 pod_ready.go:102] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace has status "Ready":"False"
	I0912 21:52:59.149417   60455 pod_ready.go:102] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace has status "Ready":"False"
	I0912 21:53:01.649453   60455 pod_ready.go:102] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace has status "Ready":"False"
	I0912 21:53:03.649440   60455 pod_ready.go:92] pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:03.649465   60455 pod_ready.go:81] duration metric: took 14.009528552s waiting for pod "coredns-66bff467f8-bvhl5" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.649473   60455 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.653423   60455 pod_ready.go:92] pod "etcd-ingress-addon-legacy-704515" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:03.653442   60455 pod_ready.go:81] duration metric: took 3.962312ms waiting for pod "etcd-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.653455   60455 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.657251   60455 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-704515" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:03.657269   60455 pod_ready.go:81] duration metric: took 3.806996ms waiting for pod "kube-apiserver-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.657278   60455 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.660978   60455 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-704515" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:03.661000   60455 pod_ready.go:81] duration metric: took 3.714685ms waiting for pod "kube-controller-manager-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.661012   60455 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-m2jbm" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.664537   60455 pod_ready.go:92] pod "kube-proxy-m2jbm" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:03.664561   60455 pod_ready.go:81] duration metric: took 3.536485ms waiting for pod "kube-proxy-m2jbm" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.664572   60455 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:03.845966   60455 request.go:629] Waited for 181.311927ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-704515
	I0912 21:53:04.045984   60455 request.go:629] Waited for 197.37964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes/ingress-addon-legacy-704515
	I0912 21:53:04.048732   60455 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-704515" in "kube-system" namespace has status "Ready":"True"
	I0912 21:53:04.048750   60455 pod_ready.go:81] duration metric: took 384.170831ms waiting for pod "kube-scheduler-ingress-addon-legacy-704515" in "kube-system" namespace to be "Ready" ...
	I0912 21:53:04.048762   60455 pod_ready.go:38] duration metric: took 14.414614893s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 21:53:04.048782   60455 api_server.go:52] waiting for apiserver process to appear ...
	I0912 21:53:04.048829   60455 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 21:53:04.058880   60455 api_server.go:72] duration metric: took 19.577907858s to wait for apiserver process to appear ...
	I0912 21:53:04.058906   60455 api_server.go:88] waiting for apiserver healthz status ...
	I0912 21:53:04.058921   60455 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0912 21:53:04.063343   60455 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0912 21:53:04.064057   60455 api_server.go:141] control plane version: v1.18.20
	I0912 21:53:04.064076   60455 api_server.go:131] duration metric: took 5.165072ms to wait for apiserver health ...
	I0912 21:53:04.064084   60455 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 21:53:04.245445   60455 request.go:629] Waited for 181.297109ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0912 21:53:04.250640   60455 system_pods.go:59] 8 kube-system pods found
	I0912 21:53:04.250687   60455 system_pods.go:61] "coredns-66bff467f8-bvhl5" [a8185f85-80d8-405c-a5f6-e4c4ef015dba] Running
	I0912 21:53:04.250697   60455 system_pods.go:61] "etcd-ingress-addon-legacy-704515" [a18bfff2-79b5-4c91-85a8-73aba9b069d2] Running
	I0912 21:53:04.250704   60455 system_pods.go:61] "kindnet-xv2bx" [4d52742d-5549-462f-add0-b77cf353afe9] Running
	I0912 21:53:04.250710   60455 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-704515" [12ced488-091e-48c0-a1f5-3fe5f9e7550f] Running
	I0912 21:53:04.250723   60455 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-704515" [5810a7b4-68c8-4b30-b1e0-e3fcfe702d61] Running
	I0912 21:53:04.250736   60455 system_pods.go:61] "kube-proxy-m2jbm" [7a6b40bf-a054-4a69-9973-8d0bad1c905e] Running
	I0912 21:53:04.250742   60455 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-704515" [a8b36b90-5feb-402d-9537-bcf6c83c009c] Running
	I0912 21:53:04.250749   60455 system_pods.go:61] "storage-provisioner" [f2e5aefc-0123-4b0e-ba57-eb36f6ee123c] Running
	I0912 21:53:04.250761   60455 system_pods.go:74] duration metric: took 186.670363ms to wait for pod list to return data ...
	I0912 21:53:04.250774   60455 default_sa.go:34] waiting for default service account to be created ...
	I0912 21:53:04.445127   60455 request.go:629] Waited for 194.268164ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/default/serviceaccounts
	I0912 21:53:04.447365   60455 default_sa.go:45] found service account: "default"
	I0912 21:53:04.447388   60455 default_sa.go:55] duration metric: took 196.605798ms for default service account to be created ...
	I0912 21:53:04.447396   60455 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 21:53:04.645648   60455 request.go:629] Waited for 198.15657ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/namespaces/kube-system/pods
	I0912 21:53:04.651215   60455 system_pods.go:86] 8 kube-system pods found
	I0912 21:53:04.651243   60455 system_pods.go:89] "coredns-66bff467f8-bvhl5" [a8185f85-80d8-405c-a5f6-e4c4ef015dba] Running
	I0912 21:53:04.651248   60455 system_pods.go:89] "etcd-ingress-addon-legacy-704515" [a18bfff2-79b5-4c91-85a8-73aba9b069d2] Running
	I0912 21:53:04.651252   60455 system_pods.go:89] "kindnet-xv2bx" [4d52742d-5549-462f-add0-b77cf353afe9] Running
	I0912 21:53:04.651260   60455 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-704515" [12ced488-091e-48c0-a1f5-3fe5f9e7550f] Running
	I0912 21:53:04.651266   60455 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-704515" [5810a7b4-68c8-4b30-b1e0-e3fcfe702d61] Running
	I0912 21:53:04.651272   60455 system_pods.go:89] "kube-proxy-m2jbm" [7a6b40bf-a054-4a69-9973-8d0bad1c905e] Running
	I0912 21:53:04.651279   60455 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-704515" [a8b36b90-5feb-402d-9537-bcf6c83c009c] Running
	I0912 21:53:04.651286   60455 system_pods.go:89] "storage-provisioner" [f2e5aefc-0123-4b0e-ba57-eb36f6ee123c] Running
	I0912 21:53:04.651297   60455 system_pods.go:126] duration metric: took 203.895893ms to wait for k8s-apps to be running ...
	I0912 21:53:04.651308   60455 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 21:53:04.651352   60455 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 21:53:04.661651   60455 system_svc.go:56] duration metric: took 10.331688ms WaitForService to wait for kubelet.
	I0912 21:53:04.661674   60455 kubeadm.go:581] duration metric: took 20.180709606s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 21:53:04.661707   60455 node_conditions.go:102] verifying NodePressure condition ...
	I0912 21:53:04.845021   60455 request.go:629] Waited for 183.244726ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.49.2:8443/api/v1/nodes
	I0912 21:53:04.847534   60455 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 21:53:04.847557   60455 node_conditions.go:123] node cpu capacity is 8
	I0912 21:53:04.847566   60455 node_conditions.go:105] duration metric: took 185.854717ms to run NodePressure ...
	I0912 21:53:04.847576   60455 start.go:228] waiting for startup goroutines ...
	I0912 21:53:04.847583   60455 start.go:233] waiting for cluster config update ...
	I0912 21:53:04.847595   60455 start.go:242] writing updated cluster config ...
	I0912 21:53:04.847859   60455 ssh_runner.go:195] Run: rm -f paused
	I0912 21:53:04.892145   60455 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0912 21:53:04.894064   60455 out.go:177] 
	W0912 21:53:04.895503   60455 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0912 21:53:04.896915   60455 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0912 21:53:04.898287   60455 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-704515" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 12 21:55:54 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:55:54.010175603Z" level=info msg="Created container ed938c6918557dbf47501a365a3a86bd451ffb31a89ac41522252982204d96c7: default/hello-world-app-5f5d8b66bb-jmjmn/hello-world-app" id=d0a87348-a776-41b3-a293-a0c885d355c2 name=/runtime.v1alpha2.RuntimeService/CreateContainer
	Sep 12 21:55:54 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:55:54.010697521Z" level=info msg="Starting container: ed938c6918557dbf47501a365a3a86bd451ffb31a89ac41522252982204d96c7" id=77cfa9de-d0cd-4bfe-8988-147c54c8ecf0 name=/runtime.v1alpha2.RuntimeService/StartContainer
	Sep 12 21:55:54 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:55:54.019112329Z" level=info msg="Started container" PID=4927 containerID=ed938c6918557dbf47501a365a3a86bd451ffb31a89ac41522252982204d96c7 description=default/hello-world-app-5f5d8b66bb-jmjmn/hello-world-app id=77cfa9de-d0cd-4bfe-8988-147c54c8ecf0 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=72d7a53ffa75448904213a3eaa8d0f905743f885b8b0e3285f4d24f0ccf28419
	Sep 12 21:55:56 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:55:56.454170311Z" level=info msg="Checking image status: cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" id=31f89b1b-b27a-437f-8985-a89d19131feb name=/runtime.v1alpha2.ImageService/ImageStatus
	Sep 12 21:56:09 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:09.454899817Z" level=info msg="Stopping pod sandbox: 0460e6229f49ec1a3a2bb22c9d2815f5060cdd9be200c3ef021c312fb7710f53" id=f225a631-9459-4032-ac9d-c29f6e6f7bf1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:09 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:09.456051315Z" level=info msg="Stopped pod sandbox: 0460e6229f49ec1a3a2bb22c9d2815f5060cdd9be200c3ef021c312fb7710f53" id=f225a631-9459-4032-ac9d-c29f6e6f7bf1 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:09 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:09.464035618Z" level=info msg="Stopping pod sandbox: 0460e6229f49ec1a3a2bb22c9d2815f5060cdd9be200c3ef021c312fb7710f53" id=66d4b5da-7c10-4dcc-9185-5f1b990d8fc4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:09 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:09.464084418Z" level=info msg="Stopped pod sandbox (already stopped): 0460e6229f49ec1a3a2bb22c9d2815f5060cdd9be200c3ef021c312fb7710f53" id=66d4b5da-7c10-4dcc-9185-5f1b990d8fc4 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:10 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:10.206450746Z" level=info msg="Stopping container: cece25fa052cb20fa8e3535064e5f809d7e31fc3977f5f68db9d9c97a3ffc6d9 (timeout: 2s)" id=dc94ef60-1f58-4d86-98b4-2b7531ffddf0 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 12 21:56:10 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:10.208634664Z" level=info msg="Stopping container: cece25fa052cb20fa8e3535064e5f809d7e31fc3977f5f68db9d9c97a3ffc6d9 (timeout: 2s)" id=44abaab7-7c3b-4ae9-b8df-9dc7c79df4fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.215994577Z" level=warning msg="Stopping container cece25fa052cb20fa8e3535064e5f809d7e31fc3977f5f68db9d9c97a3ffc6d9 with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=dc94ef60-1f58-4d86-98b4-2b7531ffddf0 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 12 21:56:12 ingress-addon-legacy-704515 conmon[3457]: conmon cece25fa052cb20fa8e3 <ninfo>: container 3469 exited with status 137
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.377100030Z" level=info msg="Stopped container cece25fa052cb20fa8e3535064e5f809d7e31fc3977f5f68db9d9c97a3ffc6d9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-tc48k/controller" id=44abaab7-7c3b-4ae9-b8df-9dc7c79df4fd name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.377513662Z" level=info msg="Stopped container cece25fa052cb20fa8e3535064e5f809d7e31fc3977f5f68db9d9c97a3ffc6d9: ingress-nginx/ingress-nginx-controller-7fcf777cb7-tc48k/controller" id=dc94ef60-1f58-4d86-98b4-2b7531ffddf0 name=/runtime.v1alpha2.RuntimeService/StopContainer
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.377786510Z" level=info msg="Stopping pod sandbox: 09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079" id=14d1937b-b33e-436c-8cea-46c0609c2e00 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.377902542Z" level=info msg="Stopping pod sandbox: 09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079" id=02732ba9-5dc9-4c0f-85a7-6bfb5855f2ed name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.380435139Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-IFGB2DZNS77MLV6V - [0:0]\n:KUBE-HP-3V7HFPS3YW4HF3CZ - [0:0]\n-X KUBE-HP-3V7HFPS3YW4HF3CZ\n-X KUBE-HP-IFGB2DZNS77MLV6V\nCOMMIT\n"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.381747954Z" level=info msg="Closing host port tcp:80"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.381779663Z" level=info msg="Closing host port tcp:443"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.382662322Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.382681070Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.382789798Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-7fcf777cb7-tc48k Namespace:ingress-nginx ID:09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079 UID:c4533e22-0e99-43ad-a014-18303755a9bd NetNS:/var/run/netns/0e2a9d4a-4376-4557-8ce0-165c5b639afe Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.382898509Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-7fcf777cb7-tc48k from CNI network \"kindnet\" (type=ptp)"
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.425878011Z" level=info msg="Stopped pod sandbox: 09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079" id=14d1937b-b33e-436c-8cea-46c0609c2e00 name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	Sep 12 21:56:12 ingress-addon-legacy-704515 crio[957]: time="2023-09-12 21:56:12.426016239Z" level=info msg="Stopped pod sandbox (already stopped): 09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079" id=02732ba9-5dc9-4c0f-85a7-6bfb5855f2ed name=/runtime.v1alpha2.RuntimeService/StopPodSandbox
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ed938c6918557       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            23 seconds ago      Running             hello-world-app           0                   72d7a53ffa754       hello-world-app-5f5d8b66bb-jmjmn
	8049b871e8eac       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   4745e3c071962       nginx
	cece25fa052cb       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   09260c8a13bbc       ingress-nginx-controller-7fcf777cb7-tc48k
	2b412d6c23827       a013daf8730dbb3908d66f67c57053f09055fddb28fde0b5808cb24c27900dc8                                                   3 minutes ago       Exited              patch                     1                   2490b1b82f00c       ingress-nginx-admission-patch-fnzmm
	3efe3edac4fda       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   0ef60e39be3d3       ingress-nginx-admission-create-vm4p6
	13032452380c8       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   b36ba987d3b0c       coredns-66bff467f8-bvhl5
	915de5d5fcb3f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   a806addaaf1ce       storage-provisioner
	763adb1d9f486       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052                 3 minutes ago       Running             kindnet-cni               0                   b1deb20f23861       kindnet-xv2bx
	6cefc1d89e71e       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   178e4ba4c21dd       kube-proxy-m2jbm
	4fcd707825f54       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   ac1ee10feac8f       kube-scheduler-ingress-addon-legacy-704515
	0c5be7927910c       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   22d519d5a6f60       kube-controller-manager-ingress-addon-legacy-704515
	c465e263515c6       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   f036e3b55875b       kube-apiserver-ingress-addon-legacy-704515
	7b62536c8db3b       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   03418eacafa70       etcd-ingress-addon-legacy-704515
	
	* 
	* ==> coredns [13032452380c81e0dea26250c188488279dd6de6b2327c37e636d5eb97d9f257] <==
	* [INFO] 10.244.0.5:56401 - 49494 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005464624s
	[INFO] 10.244.0.5:58752 - 47303 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004907443s
	[INFO] 10.244.0.5:32872 - 64491 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004813654s
	[INFO] 10.244.0.5:48053 - 13698 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004677252s
	[INFO] 10.244.0.5:49547 - 59573 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004960845s
	[INFO] 10.244.0.5:56401 - 35999 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004765184s
	[INFO] 10.244.0.5:54369 - 4781 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.004971109s
	[INFO] 10.244.0.5:33067 - 10232 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005118789s
	[INFO] 10.244.0.5:46212 - 17158 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005030244s
	[INFO] 10.244.0.5:46212 - 17973 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00498707s
	[INFO] 10.244.0.5:49547 - 64352 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005265548s
	[INFO] 10.244.0.5:56401 - 35775 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00527599s
	[INFO] 10.244.0.5:33067 - 4488 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005190374s
	[INFO] 10.244.0.5:32872 - 24153 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005444222s
	[INFO] 10.244.0.5:54369 - 59485 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005276021s
	[INFO] 10.244.0.5:58752 - 15342 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005425675s
	[INFO] 10.244.0.5:48053 - 22287 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00550554s
	[INFO] 10.244.0.5:32872 - 23173 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000043947s
	[INFO] 10.244.0.5:33067 - 27980 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073563s
	[INFO] 10.244.0.5:49547 - 65101 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00026418s
	[INFO] 10.244.0.5:56401 - 37025 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000170154s
	[INFO] 10.244.0.5:46212 - 45847 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0002706s
	[INFO] 10.244.0.5:54369 - 27710 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000259662s
	[INFO] 10.244.0.5:48053 - 54102 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000227088s
	[INFO] 10.244.0.5:58752 - 47192 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000248059s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-704515
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-704515
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=ingress-addon-legacy-704515
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T21_52_29_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 21:52:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-704515
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 21:56:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 21:55:59 +0000   Tue, 12 Sep 2023 21:52:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 21:55:59 +0000   Tue, 12 Sep 2023 21:52:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 21:55:59 +0000   Tue, 12 Sep 2023 21:52:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 21:55:59 +0000   Tue, 12 Sep 2023 21:52:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ingress-addon-legacy-704515
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 9756c26ef36c417297a23e0fd4ea02f4
	  System UUID:                75634d8c-03f2-46cf-a571-094c93cebc52
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-jmjmn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m44s
	  kube-system                 coredns-66bff467f8-bvhl5                               100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m33s
	  kube-system                 etcd-ingress-addon-legacy-704515                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kindnet-xv2bx                                          100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m33s
	  kube-system                 kube-apiserver-ingress-addon-legacy-704515             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-704515    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 kube-proxy-m2jbm                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	  kube-system                 kube-scheduler-ingress-addon-legacy-704515             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             120Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From        Message
	  ----    ------                   ----   ----        -------
	  Normal  Starting                 3m48s  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s  kubelet     Node ingress-addon-legacy-704515 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s  kubelet     Node ingress-addon-legacy-704515 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s  kubelet     Node ingress-addon-legacy-704515 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m32s  kube-proxy  Starting kube-proxy.
	  Normal  NodeReady                3m28s  kubelet     Node ingress-addon-legacy-704515 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004922] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=00000000a5c12aec{9p.inode} n=00000000085c872c
	[  +0.007353] FS-Cache: N-key=[8] '7ca00f0200000000'
	[  +0.418767] FS-Cache: Duplicate cookie detected
	[  +0.004695] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000a5c12aec{9p.inode} n=000000000cd7d0be
	[  +0.007348] FS-Cache: O-key=[8] '83a00f0200000000'
	[  +0.005008] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007975] FS-Cache: N-cookie d=00000000a5c12aec{9p.inode} n=000000000493d8bd
	[  +0.008751] FS-Cache: N-key=[8] '83a00f0200000000'
	[ +18.292989] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep12 21:53] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +1.004063] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +2.015757] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +4.191565] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +8.191236] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[Sep12 21:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[ +32.764792] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	
	* 
	* ==> etcd [7b62536c8db3b440db1ada7f0383d0a2d9d8ab3ec549da802b68fe168e4495e1] <==
	* raft2023/09/12 21:52:22 INFO: aec36adc501070cc became follower at term 0
	raft2023/09/12 21:52:22 INFO: newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	raft2023/09/12 21:52:22 INFO: aec36adc501070cc became follower at term 1
	raft2023/09/12 21:52:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-12 21:52:22.825601 W | auth: simple token is not cryptographically signed
	2023-09-12 21:52:22.828551 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-12 21:52:22.828914 I | etcdserver: aec36adc501070cc as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/12 21:52:22 INFO: aec36adc501070cc switched to configuration voters=(12593026477526642892)
	2023-09-12 21:52:22.829325 I | etcdserver/membership: added member aec36adc501070cc [https://192.168.49.2:2380] to cluster fa54960ea34d58be
	2023-09-12 21:52:22.831433 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-12 21:52:22.831649 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-12 21:52:22.831726 I | embed: listening for peers on 192.168.49.2:2380
	raft2023/09/12 21:52:23 INFO: aec36adc501070cc is starting a new election at term 1
	raft2023/09/12 21:52:23 INFO: aec36adc501070cc became candidate at term 2
	raft2023/09/12 21:52:23 INFO: aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2
	raft2023/09/12 21:52:23 INFO: aec36adc501070cc became leader at term 2
	raft2023/09/12 21:52:23 INFO: raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2
	2023-09-12 21:52:23.268637 I | embed: ready to serve client requests
	2023-09-12 21:52:23.268664 I | embed: ready to serve client requests
	2023-09-12 21:52:23.268751 I | etcdserver: published {Name:ingress-addon-legacy-704515 ClientURLs:[https://192.168.49.2:2379]} to cluster fa54960ea34d58be
	2023-09-12 21:52:23.268875 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-12 21:52:23.269122 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-12 21:52:23.269187 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-12 21:52:23.271433 I | embed: serving client requests on 192.168.49.2:2379
	2023-09-12 21:52:23.271547 I | embed: serving client requests on 127.0.0.1:2379
	
	* 
	* ==> kernel <==
	*  21:56:17 up  1:38,  0 users,  load average: 0.38, 0.65, 0.48
	Linux ingress-addon-legacy-704515 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [763adb1d9f486d63ad7adf0455a9bccba237937b90fc98f711d1a176d3f0000c] <==
	* I0912 21:54:08.483511       1 main.go:227] handling current node
	I0912 21:54:18.486620       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:54:18.486645       1 main.go:227] handling current node
	I0912 21:54:28.497252       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:54:28.497275       1 main.go:227] handling current node
	I0912 21:54:38.501972       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:54:38.502005       1 main.go:227] handling current node
	I0912 21:54:48.505246       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:54:48.505268       1 main.go:227] handling current node
	I0912 21:54:58.509841       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:54:58.509871       1 main.go:227] handling current node
	I0912 21:55:08.519521       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:08.519542       1 main.go:227] handling current node
	I0912 21:55:18.522826       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:18.522849       1 main.go:227] handling current node
	I0912 21:55:28.531426       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:28.531451       1 main.go:227] handling current node
	I0912 21:55:38.535079       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:38.535106       1 main.go:227] handling current node
	I0912 21:55:48.540675       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:48.540699       1 main.go:227] handling current node
	I0912 21:55:58.543650       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:55:58.543675       1 main.go:227] handling current node
	I0912 21:56:08.555430       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0912 21:56:08.555453       1 main.go:227] handling current node
	
	* 
	* ==> kube-apiserver [c465e263515c6a7ea38fccfc243fb02ed73f787fd6cd810f376b81a2b13fa75e] <==
	* I0912 21:52:26.460543       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
	I0912 21:52:26.460569       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0912 21:52:26.620756       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 21:52:26.620857       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0912 21:52:26.620871       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0912 21:52:26.635454       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 21:52:26.635454       1 cache.go:39] Caches are synced for autoregister controller
	I0912 21:52:27.433719       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0912 21:52:27.433747       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0912 21:52:27.442145       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0912 21:52:27.444700       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0912 21:52:27.444717       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0912 21:52:27.687640       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 21:52:27.723773       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0912 21:52:27.849893       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0912 21:52:27.850649       1 controller.go:609] quota admission added evaluator for: endpoints
	I0912 21:52:27.853449       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 21:52:28.795622       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0912 21:52:29.071890       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0912 21:52:29.234126       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0912 21:52:29.440401       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 21:52:44.505461       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0912 21:52:44.826314       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0912 21:53:05.551944       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0912 21:53:33.580685       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [0c5be7927910c0fda55f7fdcd7c5a79fce4ae4cbd88f9f0a32aa88c1fe916ef2] <==
	* I0912 21:52:44.822059       1 shared_informer.go:230] Caches are synced for daemon sets 
	I0912 21:52:44.833377       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"76338c4f-6023-4262-9180-f6b899011df5", APIVersion:"apps/v1", ResourceVersion:"224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-xv2bx
	I0912 21:52:44.921351       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0912 21:52:44.921464       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0912 21:52:44.921733       1 shared_informer.go:230] Caches are synced for taint 
	I0912 21:52:44.922120       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	I0912 21:52:44.921787       1 shared_informer.go:230] Caches are synced for resource quota 
	I0912 21:52:44.921805       1 shared_informer.go:230] Caches are synced for resource quota 
	W0912 21:52:44.922282       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-704515. Assuming now as a timestamp.
	I0912 21:52:44.922327       1 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0912 21:52:44.922468       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0912 21:52:44.922869       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-704515", UID:"038e2d67-a919-48cf-8e4f-b918000aa232", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-704515 event: Registered Node ingress-addon-legacy-704515 in Controller
	I0912 21:52:44.942963       1 shared_informer.go:223] Waiting for caches to sync for garbage collector
	I0912 21:52:44.943119       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0912 21:52:45.022630       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"85f7de8f-3190-4d14-a2b5-dbfeeef9c3d6", APIVersion:"apps/v1", ResourceVersion:"206", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-m2jbm
	I0912 21:52:49.922593       1 node_lifecycle_controller.go:1226] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0912 21:53:05.543350       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5bd464d1-c21f-4fc6-9771-c8fea64eaf20", APIVersion:"apps/v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0912 21:53:05.548488       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"f0eafccf-5a05-43a2-9a7b-ac0e5f03aa62", APIVersion:"apps/v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-tc48k
	I0912 21:53:05.559186       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b1db79f3-b033-4bbd-a762-ef41dadeaaf0", APIVersion:"batch/v1", ResourceVersion:"467", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-vm4p6
	I0912 21:53:05.632722       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"03337969-6f6f-4378-ad09-1ce160cf6c6f", APIVersion:"batch/v1", ResourceVersion:"477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-fnzmm
	I0912 21:53:08.540343       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b1db79f3-b033-4bbd-a762-ef41dadeaaf0", APIVersion:"batch/v1", ResourceVersion:"475", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0912 21:53:08.546585       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"03337969-6f6f-4378-ad09-1ce160cf6c6f", APIVersion:"batch/v1", ResourceVersion:"483", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0912 21:55:52.027748       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"14be30b5-5f79-4ac9-85f6-c10cb91cf594", APIVersion:"apps/v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0912 21:55:52.034055       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"39f1eb9c-ac54-47fb-aebb-53ccb44a1408", APIVersion:"apps/v1", ResourceVersion:"706", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-jmjmn
	E0912 21:56:14.979034       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-2m74s" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [6cefc1d89e71e379c18e5aa52516846a01821a40f30e4c36ffc3284c14d3b8f7] <==
	* W0912 21:52:45.484291       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0912 21:52:45.490155       1 node.go:136] Successfully retrieved node IP: 192.168.49.2
	I0912 21:52:45.490180       1 server_others.go:186] Using iptables Proxier.
	I0912 21:52:45.490363       1 server.go:583] Version: v1.18.20
	I0912 21:52:45.490802       1 config.go:315] Starting service config controller
	I0912 21:52:45.490815       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0912 21:52:45.490830       1 config.go:133] Starting endpoints config controller
	I0912 21:52:45.490861       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0912 21:52:45.591021       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0912 21:52:45.591021       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [4fcd707825f5457df5867e94573b2664d92800d2da442d71eb96a2232d825b7f] <==
	* W0912 21:52:26.460449       1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0912 21:52:26.460456       1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0912 21:52:26.626513       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0912 21:52:26.626610       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0912 21:52:26.628833       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0912 21:52:26.630075       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:52:26.630135       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 21:52:26.630179       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0912 21:52:26.630789       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 21:52:26.631186       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 21:52:26.631364       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 21:52:26.631611       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0912 21:52:26.632203       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:52:26.633096       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 21:52:26.633223       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0912 21:52:26.633435       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:52:26.633694       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 21:52:26.633781       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0912 21:52:26.633840       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:52:26.633894       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 21:52:27.479445       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0912 21:52:27.509430       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0912 21:52:27.518302       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 21:52:27.560225       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0912 21:52:30.730324       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* Sep 12 21:55:28 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:28.454670    1859 pod_workers.go:191] Error syncing pod bef14ecd-aca3-45ef-9f76-5dc75e65c7a4 ("kube-ingress-dns-minikube_kube-system(bef14ecd-aca3-45ef-9f76-5dc75e65c7a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 12 21:55:43 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:43.454481    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:43 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:43.454524    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:43 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:43.454567    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:43 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:43.454604    1859 pod_workers.go:191] Error syncing pod bef14ecd-aca3-45ef-9f76-5dc75e65c7a4 ("kube-ingress-dns-minikube_kube-system(bef14ecd-aca3-45ef-9f76-5dc75e65c7a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 12 21:55:52 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:55:52.041528    1859 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 12 21:55:52 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:55:52.229400    1859 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5jsqx" (UniqueName: "kubernetes.io/secret/c4937e18-1227-4db4-8839-da8eede0285b-default-token-5jsqx") pod "hello-world-app-5f5d8b66bb-jmjmn" (UID: "c4937e18-1227-4db4-8839-da8eede0285b")
	Sep 12 21:55:52 ingress-addon-legacy-704515 kubelet[1859]: W0912 21:55:52.389684    1859 manager.go:1131] Failed to process watch event {EventType:0 Name:/docker/99af414f51e50fd4903df83e2a93b58e8af3b0d6eb0cd9e0dfc676dd78025f24/crio-72d7a53ffa75448904213a3eaa8d0f905743f885b8b0e3285f4d24f0ccf28419 WatchSource:0}: Error finding container 72d7a53ffa75448904213a3eaa8d0f905743f885b8b0e3285f4d24f0ccf28419: Status 404 returned error &{%!!(MISSING)s(*http.body=&{0xc0008f6080 <nil> <nil> false false {0 0} false false false <nil>}) {%!!(MISSING)s(int32=0) %!!(MISSING)s(uint32=0)} %!!(MISSING)s(bool=false) <nil> %!!(MISSING)s(func(error) error=0x750800) %!!(MISSING)s(func() error=0x750790)}
	Sep 12 21:55:56 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:56.454529    1859 remote_image.go:87] ImageStatus "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" from image service failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:56 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:56.454568    1859 kuberuntime_image.go:85] ImageStatus for image {"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab"} failed: rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:56 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:56.454615    1859 kuberuntime_manager.go:818] container start failed: ImageInspectError: Failed to inspect image "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab": rpc error: code = Unknown desc = short-name "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	Sep 12 21:55:56 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:55:56.454644    1859 pod_workers.go:191] Error syncing pod bef14ecd-aca3-45ef-9f76-5dc75e65c7a4 ("kube-ingress-dns-minikube_kube-system(bef14ecd-aca3-45ef-9f76-5dc75e65c7a4)"), skipping: failed to "StartContainer" for "minikube-ingress-dns" with ImageInspectError: "Failed to inspect image \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\": rpc error: code = Unknown desc = short-name \"cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4c704027342b303cc563e95d2e71d2a0f1404f55d676390e28d5093ab\" did not resolve to an alias and no unqualified-search registries are defined in \"/etc/containers/registries.conf\""
	Sep 12 21:56:07 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:07.765124    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-hjfn9" (UniqueName: "kubernetes.io/secret/bef14ecd-aca3-45ef-9f76-5dc75e65c7a4-minikube-ingress-dns-token-hjfn9") pod "bef14ecd-aca3-45ef-9f76-5dc75e65c7a4" (UID: "bef14ecd-aca3-45ef-9f76-5dc75e65c7a4")
	Sep 12 21:56:07 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:07.766982    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bef14ecd-aca3-45ef-9f76-5dc75e65c7a4-minikube-ingress-dns-token-hjfn9" (OuterVolumeSpecName: "minikube-ingress-dns-token-hjfn9") pod "bef14ecd-aca3-45ef-9f76-5dc75e65c7a4" (UID: "bef14ecd-aca3-45ef-9f76-5dc75e65c7a4"). InnerVolumeSpecName "minikube-ingress-dns-token-hjfn9". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:56:07 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:07.865384    1859 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-hjfn9" (UniqueName: "kubernetes.io/secret/bef14ecd-aca3-45ef-9f76-5dc75e65c7a4-minikube-ingress-dns-token-hjfn9") on node "ingress-addon-legacy-704515" DevicePath ""
	Sep 12 21:56:09 ingress-addon-legacy-704515 kubelet[1859]: W0912 21:56:09.786735    1859 pod_container_deletor.go:77] Container "0460e6229f49ec1a3a2bb22c9d2815f5060cdd9be200c3ef021c312fb7710f53" not found in pod's containers
	Sep 12 21:56:10 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:56:10.207334    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tc48k.1784457eaf346409", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tc48k", UID:"c4533e22-0e99-43ad-a014-18303755a9bd", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-704515"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138557e8c442009, ext:221167314626, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138557e8c442009, ext:221167314626, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tc48k.1784457eaf346409" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 12 21:56:10 ingress-addon-legacy-704515 kubelet[1859]: E0912 21:56:10.210885    1859 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-tc48k.1784457eaf346409", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-tc48k", UID:"c4533e22-0e99-43ad-a014-18303755a9bd", APIVersion:"v1", ResourceVersion:"466", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-704515"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc138557e8c442009, ext:221167314626, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc138557e8c6635b6, ext:221169548411, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-tc48k.1784457eaf346409" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 12 21:56:12 ingress-addon-legacy-704515 kubelet[1859]: W0912 21:56:12.792285    1859 pod_container_deletor.go:77] Container "09260c8a13bbc8fa0bc8e5afbb17184c6d1aab8ecf0f1d63bbb86affc65e0079" not found in pod's containers
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.330273    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-webhook-cert") pod "c4533e22-0e99-43ad-a014-18303755a9bd" (UID: "c4533e22-0e99-43ad-a014-18303755a9bd")
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.330327    1859 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-8pr76" (UniqueName: "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-ingress-nginx-token-8pr76") pod "c4533e22-0e99-43ad-a014-18303755a9bd" (UID: "c4533e22-0e99-43ad-a014-18303755a9bd")
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.332218    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c4533e22-0e99-43ad-a014-18303755a9bd" (UID: "c4533e22-0e99-43ad-a014-18303755a9bd"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.332373    1859 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-ingress-nginx-token-8pr76" (OuterVolumeSpecName: "ingress-nginx-token-8pr76") pod "c4533e22-0e99-43ad-a014-18303755a9bd" (UID: "c4533e22-0e99-43ad-a014-18303755a9bd"). InnerVolumeSpecName "ingress-nginx-token-8pr76". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.430577    1859 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-webhook-cert") on node "ingress-addon-legacy-704515" DevicePath ""
	Sep 12 21:56:14 ingress-addon-legacy-704515 kubelet[1859]: I0912 21:56:14.430606    1859 reconciler.go:319] Volume detached for volume "ingress-nginx-token-8pr76" (UniqueName: "kubernetes.io/secret/c4533e22-0e99-43ad-a014-18303755a9bd-ingress-nginx-token-8pr76") on node "ingress-addon-legacy-704515" DevicePath ""
	
	* 
	* ==> storage-provisioner [915de5d5fcb3fbc70592ada856ebbfc768ddf9f7ce161186b5b469e5bce019c5] <==
	* I0912 21:52:54.284159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0912 21:52:54.291931       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0912 21:52:54.291978       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0912 21:52:54.325338       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0912 21:52:54.325513       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-704515_b02c7171-5bb8-469c-80bb-bd5d6ca13e18!
	I0912 21:52:54.325493       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"92d0effe-18cf-43ce-8dc0-13ed632bfdc6", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-704515_b02c7171-5bb8-469c-80bb-bd5d6ca13e18 became leader
	I0912 21:52:54.425740       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-704515_b02c7171-5bb8-469c-80bb-bd5d6ca13e18!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-704515 -n ingress-addon-legacy-704515
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-704515 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (182.38s)
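The kubelet log above points at the likely root cause for this failure: CRI-O rejects the short-name image reference "cryptexlabs/minikube-ingress-dns:0.3.0@sha256:e252d2a4..." because no unqualified-search registries are defined in /etc/containers/registries.conf, so the kube-ingress-dns-minikube pod never gets its container started. A minimal sketch of two ways this could be worked around on the node, assuming docker.io is the intended registry (the commands below are illustrative and are not part of the test):

	# Pull the image with a fully-qualified reference, which bypasses short-name resolution:
	out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh \
	  "sudo crictl pull docker.io/cryptexlabs/minikube-ingress-dns:0.3.0"
	# Or let short names fall back to docker.io inside the node, then restart CRI-O:
	out/minikube-linux-amd64 -p ingress-addon-legacy-704515 ssh \
	  'echo "unqualified-search-registries = [\"docker.io\"]" | sudo tee -a /etc/containers/registries.conf && sudo systemctl restart crio'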

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- sh -c "ping -c 1 192.168.58.1": exit status 1 (160.028194ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-2lnnj): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- sh -c "ping -c 1 192.168.58.1": exit status 1 (160.673101ms)

                                                
                                                
-- stdout --
	PING 192.168.58.1 (192.168.58.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.58.1) from pod (busybox-5bc68d56bd-4qwb4): exit status 1
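Both pods fail the same way: "ping: permission denied (are you root?)" comes from busybox's ping applet, which needs either the CAP_NET_RAW capability or a kernel that allows unprivileged ICMP sockets for the pod's group (net.ipv4.ping_group_range). It is a privilege problem inside the pod, not a routing problem. A hedged sketch of how the busybox deployment could be granted that privilege; the JSON-patch path and the sysctl alternative are illustrative and are not taken from testdata/multinodes/multinode-pod-dns-test.yaml:

	# Add CAP_NET_RAW to the first container of the busybox deployment so ping can open raw sockets:
	kubectl --context multinode-947523 patch deployment busybox --type=json -p \
	  '[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'
	# Alternative: allow unprivileged ICMP for all GIDs via the pod-level safe sysctl
	# net.ipv4.ping_group_range (spec.securityContext.sysctls), e.g. value "0 2147483647".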
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-947523
helpers_test.go:235: (dbg) docker inspect multinode-947523:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281",
	        "Created": "2023-09-12T22:00:50.719127032Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 106889,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T22:00:51.002831465Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0508862d812894c98deaaf3533e6d3386b479df1d249d4410a6247f1f44ad45d",
	        "ResolvConfPath": "/var/lib/docker/containers/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/hostname",
	        "HostsPath": "/var/lib/docker/containers/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/hosts",
	        "LogPath": "/var/lib/docker/containers/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281-json.log",
	        "Name": "/multinode-947523",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-947523:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-947523",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2cb0bbab35f29a136127b307bcfd1fea3e531cecc0a1117cb1b09d6d72b616ee-init/diff:/var/lib/docker/overlay2/27d59bddd44498ba277aabbca5bbef44e363739d94cbe3a544670a142640c048/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2cb0bbab35f29a136127b307bcfd1fea3e531cecc0a1117cb1b09d6d72b616ee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2cb0bbab35f29a136127b307bcfd1fea3e531cecc0a1117cb1b09d6d72b616ee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2cb0bbab35f29a136127b307bcfd1fea3e531cecc0a1117cb1b09d6d72b616ee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-947523",
	                "Source": "/var/lib/docker/volumes/multinode-947523/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-947523",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-947523",
	                "name.minikube.sigs.k8s.io": "multinode-947523",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d17ad4576bfbcba841a28ad5cc7f1c1c8c5dd155aa9b59d0fdadafc8d1691524",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32847"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32846"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32845"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32844"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/d17ad4576bfb",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-947523": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1fdd02d2728f",
	                        "multinode-947523"
	                    ],
	                    "NetworkID": "dd1ba5635088eed19d29b6c5bbb18c9a642a9874a278a38fa1d98713b580e7a3",
	                    "EndpointID": "c8017c075b9875cf2a61431b6a11c8e8b1272cd8a31b7cc71ab330edf71a0305",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
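For context, the inspect output above shows that 192.168.58.1 is simply the gateway of the multinode-947523 Docker network (the host side of the bridge) and that the node itself sits at 192.168.58.2, so the address the pods tried to ping is reachable at the routing level; the failure recorded earlier is purely the missing ICMP privilege inside the pod. A quick, illustrative check of the gateway from the node, assuming the profile is still running:

	out/minikube-linux-amd64 -p multinode-947523 ssh "ip route show default"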
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-947523 -n multinode-947523
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-947523 logs -n 25: (1.205844056s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-899248                           | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| ssh     | mount-start-2-899248 ssh -- ls                    | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-883342                           | mount-start-1-883342 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-899248 ssh -- ls                    | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-899248                           | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	| start   | -p mount-start-2-899248                           | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	| ssh     | mount-start-2-899248 ssh -- ls                    | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-899248                           | mount-start-2-899248 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	| delete  | -p mount-start-1-883342                           | mount-start-1-883342 | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:00 UTC |
	| start   | -p multinode-947523                               | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:00 UTC | 12 Sep 23 22:01 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- apply -f                   | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- rollout                    | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- get pods -o                | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- get pods -o                | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-2lnnj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-4qwb4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-2lnnj --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-4qwb4 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-2lnnj -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-4qwb4 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- get pods -o                | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-2lnnj                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC |                     |
	|         | busybox-5bc68d56bd-2lnnj -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC | 12 Sep 23 22:01 UTC |
	|         | busybox-5bc68d56bd-4qwb4                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-947523 -- exec                       | multinode-947523     | jenkins | v1.31.2 | 12 Sep 23 22:01 UTC |                     |
	|         | busybox-5bc68d56bd-4qwb4 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.58.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 22:00:44
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:00:44.909204  106287 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:00:44.909482  106287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:00:44.909492  106287 out.go:309] Setting ErrFile to fd 2...
	I0912 22:00:44.909497  106287 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:00:44.909743  106287 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:00:44.910348  106287 out.go:303] Setting JSON to false
	I0912 22:00:44.911586  106287 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6193,"bootTime":1694549852,"procs":572,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:00:44.911663  106287 start.go:138] virtualization: kvm guest
	I0912 22:00:44.913741  106287 out.go:177] * [multinode-947523] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:00:44.915212  106287 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:00:44.915214  106287 notify.go:220] Checking for updates...
	I0912 22:00:44.916560  106287 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:00:44.917788  106287 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:00:44.919113  106287 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:00:44.920373  106287 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:00:44.921655  106287 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:00:44.923146  106287 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:00:44.944478  106287 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:00:44.944570  106287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:00:44.995538  106287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-12 22:00:44.987056683 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:00:44.995626  106287 docker.go:294] overlay module found
	I0912 22:00:44.997304  106287 out.go:177] * Using the docker driver based on user configuration
	I0912 22:00:44.998580  106287 start.go:298] selected driver: docker
	I0912 22:00:44.998589  106287 start.go:902] validating driver "docker" against <nil>
	I0912 22:00:44.998598  106287 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:00:44.999318  106287 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:00:45.050281  106287 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2023-09-12 22:00:45.042458308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:00:45.050489  106287 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 22:00:45.050766  106287 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:00:45.052353  106287 out.go:177] * Using Docker driver with root privileges
	I0912 22:00:45.053777  106287 cni.go:84] Creating CNI manager for ""
	I0912 22:00:45.053799  106287 cni.go:136] 0 nodes found, recommending kindnet
	I0912 22:00:45.053808  106287 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 22:00:45.053825  106287 start_flags.go:321] config:
	{Name:multinode-947523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:00:45.055495  106287 out.go:177] * Starting control plane node multinode-947523 in cluster multinode-947523
	I0912 22:00:45.056709  106287 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:00:45.058015  106287 out.go:177] * Pulling base image ...
	I0912 22:00:45.059185  106287 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:00:45.059219  106287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:00:45.059240  106287 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:00:45.059262  106287 cache.go:57] Caching tarball of preloaded images
	I0912 22:00:45.059393  106287 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:00:45.059409  106287 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 22:00:45.060296  106287 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json ...
	I0912 22:00:45.060338  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json: {Name:mkc0a98934230f93d1a18d354c1ebc67d8d09007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:45.075285  106287 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:00:45.075307  106287 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 22:00:45.075325  106287 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:00:45.075351  106287 start.go:365] acquiring machines lock for multinode-947523: {Name:mk7c083938172db4854a58c0d57ce0d9d8c74a66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:00:45.075438  106287 start.go:369] acquired machines lock for "multinode-947523" in 71.598µs
	I0912 22:00:45.075458  106287 start.go:93] Provisioning new machine with config: &{Name:multinode-947523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:00:45.075523  106287 start.go:125] createHost starting for "" (driver="docker")
	I0912 22:00:45.077148  106287 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0912 22:00:45.077330  106287 start.go:159] libmachine.API.Create for "multinode-947523" (driver="docker")
	I0912 22:00:45.077351  106287 client.go:168] LocalClient.Create starting
	I0912 22:00:45.077405  106287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 22:00:45.077435  106287 main.go:141] libmachine: Decoding PEM data...
	I0912 22:00:45.077457  106287 main.go:141] libmachine: Parsing certificate...
	I0912 22:00:45.077507  106287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 22:00:45.077524  106287 main.go:141] libmachine: Decoding PEM data...
	I0912 22:00:45.077533  106287 main.go:141] libmachine: Parsing certificate...
	I0912 22:00:45.077807  106287 cli_runner.go:164] Run: docker network inspect multinode-947523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 22:00:45.092972  106287 cli_runner.go:211] docker network inspect multinode-947523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 22:00:45.093055  106287 network_create.go:281] running [docker network inspect multinode-947523] to gather additional debugging logs...
	I0912 22:00:45.093076  106287 cli_runner.go:164] Run: docker network inspect multinode-947523
	W0912 22:00:45.107962  106287 cli_runner.go:211] docker network inspect multinode-947523 returned with exit code 1
	I0912 22:00:45.107991  106287 network_create.go:284] error running [docker network inspect multinode-947523]: docker network inspect multinode-947523: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-947523 not found
	I0912 22:00:45.108009  106287 network_create.go:286] output of [docker network inspect multinode-947523]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-947523 not found
	
	** /stderr **
	I0912 22:00:45.108080  106287 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:00:45.123393  106287 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-38edbaf277f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:77:7e:89} reservation:<nil>}
	I0912 22:00:45.123976  106287 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00125b210}
	I0912 22:00:45.124004  106287 network_create.go:123] attempt to create docker network multinode-947523 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0912 22:00:45.124054  106287 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-947523 multinode-947523
	I0912 22:00:45.172531  106287 network_create.go:107] docker network multinode-947523 192.168.58.0/24 created
	I0912 22:00:45.172561  106287 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-947523" container
	I0912 22:00:45.172641  106287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 22:00:45.188787  106287 cli_runner.go:164] Run: docker volume create multinode-947523 --label name.minikube.sigs.k8s.io=multinode-947523 --label created_by.minikube.sigs.k8s.io=true
	I0912 22:00:45.204689  106287 oci.go:103] Successfully created a docker volume multinode-947523
	I0912 22:00:45.204752  106287 cli_runner.go:164] Run: docker run --rm --name multinode-947523-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-947523 --entrypoint /usr/bin/test -v multinode-947523:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 22:00:45.719571  106287 oci.go:107] Successfully prepared a docker volume multinode-947523
	I0912 22:00:45.719624  106287 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:00:45.719646  106287 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 22:00:45.719717  106287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-947523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 22:00:50.654126  106287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-947523:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (4.934357013s)
	I0912 22:00:50.654157  106287 kic.go:199] duration metric: took 4.934508 seconds to extract preloaded images to volume
	W0912 22:00:50.654280  106287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 22:00:50.654367  106287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 22:00:50.704753  106287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-947523 --name multinode-947523 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-947523 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-947523 --network multinode-947523 --ip 192.168.58.2 --volume multinode-947523:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
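Editorial note (not part of the captured log): the docker run above creates the KIC node container itself. The flags that matter for this multinode run, with every value taken from the logged command, are summarized below; the annotations are interpretation, not captured output.

    # annotated excerpt (comments only; values copied verbatim from the command above)
    #   --privileged --security-opt seccomp=unconfined   needed so systemd and a nested container runtime can run inside the node
    #   --network multinode-947523 --ip 192.168.58.2     the bridge network created at 22:00:45 and its reserved static IP
    #   --volume multinode-947523:/var                   the volume that received the extracted preload tarball
    #   --memory=2200mb --cpus=2                         the Memory/CPUs values from the generated cluster config
    #   --publish=127.0.0.1::8443 / ::22 / ::2376 ...    loopback-only host ports for the apiserver, SSH and friends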
	I0912 22:00:51.010296  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Running}}
	I0912 22:00:51.027316  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:00:51.044901  106287 cli_runner.go:164] Run: docker exec multinode-947523 stat /var/lib/dpkg/alternatives/iptables
	I0912 22:00:51.097207  106287 oci.go:144] the created container "multinode-947523" has a running status.
	I0912 22:00:51.097238  106287 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa...
	I0912 22:00:51.354415  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0912 22:00:51.354460  106287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 22:00:51.380753  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:00:51.398275  106287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 22:00:51.398296  106287 kic_runner.go:114] Args: [docker exec --privileged multinode-947523 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 22:00:51.473532  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:00:51.494307  106287 machine.go:88] provisioning docker machine ...
	I0912 22:00:51.494349  106287 ubuntu.go:169] provisioning hostname "multinode-947523"
	I0912 22:00:51.494409  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:51.509907  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:00:51.510319  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0912 22:00:51.510338  106287 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-947523 && echo "multinode-947523" | sudo tee /etc/hostname
	I0912 22:00:51.670846  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-947523
	
	I0912 22:00:51.670909  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:51.688472  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:00:51.688827  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0912 22:00:51.688852  106287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-947523' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-947523/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-947523' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:00:51.828555  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:00:51.828578  106287 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:00:51.828623  106287 ubuntu.go:177] setting up certificates
	I0912 22:00:51.828634  106287 provision.go:83] configureAuth start
	I0912 22:00:51.828693  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523
	I0912 22:00:51.844622  106287 provision.go:138] copyHostCerts
	I0912 22:00:51.844658  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:00:51.844683  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:00:51.844689  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:00:51.844753  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:00:51.844839  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:00:51.844858  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:00:51.844863  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:00:51.844888  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:00:51.844938  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:00:51.844958  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:00:51.844961  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:00:51.844990  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:00:51.845044  106287 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.multinode-947523 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-947523]
	I0912 22:00:52.029595  106287 provision.go:172] copyRemoteCerts
	I0912 22:00:52.029653  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:00:52.029686  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.045722  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:00:52.140711  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:00:52.140773  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:00:52.161820  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:00:52.161871  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0912 22:00:52.182106  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:00:52.182164  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:00:52.203365  106287 provision.go:86] duration metric: configureAuth took 374.713223ms
	I0912 22:00:52.203392  106287 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:00:52.203572  106287 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:00:52.203659  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.220436  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:00:52.220871  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32847 <nil> <nil>}
	I0912 22:00:52.220900  106287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:00:52.435583  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:00:52.435610  106287 machine.go:91] provisioned docker machine in 941.277477ms
	I0912 22:00:52.435622  106287 client.go:171] LocalClient.Create took 7.358265249s
	I0912 22:00:52.435650  106287 start.go:167] duration metric: libmachine.API.Create for "multinode-947523" took 7.358321315s
	I0912 22:00:52.435659  106287 start.go:300] post-start starting for "multinode-947523" (driver="docker")
	I0912 22:00:52.435667  106287 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:00:52.435721  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:00:52.435763  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.452305  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:00:52.549001  106287 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:00:52.551803  106287 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0912 22:00:52.551824  106287 command_runner.go:130] > NAME="Ubuntu"
	I0912 22:00:52.551833  106287 command_runner.go:130] > VERSION_ID="22.04"
	I0912 22:00:52.551846  106287 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0912 22:00:52.551856  106287 command_runner.go:130] > VERSION_CODENAME=jammy
	I0912 22:00:52.551866  106287 command_runner.go:130] > ID=ubuntu
	I0912 22:00:52.551874  106287 command_runner.go:130] > ID_LIKE=debian
	I0912 22:00:52.551880  106287 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0912 22:00:52.551888  106287 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0912 22:00:52.551896  106287 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0912 22:00:52.551906  106287 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0912 22:00:52.551913  106287 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0912 22:00:52.551974  106287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:00:52.551999  106287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:00:52.552009  106287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:00:52.552018  106287 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 22:00:52.552029  106287 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:00:52.552077  106287 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:00:52.552142  106287 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:00:52.552152  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /etc/ssl/certs/226982.pem
	I0912 22:00:52.552228  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:00:52.559645  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:00:52.579949  106287 start.go:303] post-start completed in 144.276317ms
	I0912 22:00:52.580315  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523
	I0912 22:00:52.597150  106287 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json ...
	I0912 22:00:52.597378  106287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:00:52.597414  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.612773  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:00:52.705066  106287 command_runner.go:130] > 22%!
	(MISSING)I0912 22:00:52.705153  106287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:00:52.708924  106287 command_runner.go:130] > 228G
	I0912 22:00:52.709065  106287 start.go:128] duration metric: createHost completed in 7.633531095s
	I0912 22:00:52.709084  106287 start.go:83] releasing machines lock for "multinode-947523", held for 7.633634743s
	I0912 22:00:52.709152  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523
	I0912 22:00:52.724765  106287 ssh_runner.go:195] Run: cat /version.json
	I0912 22:00:52.724820  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.724853  106287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:00:52.724909  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:00:52.741137  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:00:52.741479  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:00:52.836248  106287 command_runner.go:130] > {"iso_version": "v1.31.0-1694081706-17207", "kicbase_version": "v0.0.40-1694457807-17194", "minikube_version": "v1.31.2", "commit": "d1c06690fcfd58598aad653e491fbf7a09089c48"}
	I0912 22:00:52.836362  106287 ssh_runner.go:195] Run: systemctl --version
	I0912 22:00:52.923177  106287 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 22:00:52.923213  106287 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.9)
	I0912 22:00:52.923230  106287 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0912 22:00:52.923286  106287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:00:53.058834  106287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:00:53.063071  106287 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0912 22:00:53.063102  106287 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0912 22:00:53.063118  106287 command_runner.go:130] > Device: 36h/54d	Inode: 552137      Links: 1
	I0912 22:00:53.063130  106287 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:00:53.063141  106287 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0912 22:00:53.063152  106287 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0912 22:00:53.063164  106287 command_runner.go:130] > Change: 2023-09-12 21:43:42.991834785 +0000
	I0912 22:00:53.063169  106287 command_runner.go:130] >  Birth: 2023-09-12 21:43:42.991834785 +0000
	I0912 22:00:53.063292  106287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:00:53.080076  106287 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:00:53.080154  106287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:00:53.106262  106287 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0912 22:00:53.106314  106287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
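Editorial note (not part of the captured log): the find invocation two lines up is logged without shell quoting and with its -printf format eaten by the Go logger (%!p(MISSING)). A quoted, runnable reconstruction of what it does, renaming any bridge or podman CNI config so that, presumably, only the kindnet CNI chosen earlier stays active, would be the sketch below; the '%p, ' format is inferred from the comma-separated path list in the output above, not read from the log.

    # quoted reconstruction of the logged command (the -printf format is an assumption)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;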
	I0912 22:00:53.106328  106287 start.go:469] detecting cgroup driver to use...
	I0912 22:00:53.106358  106287 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:00:53.106398  106287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:00:53.119863  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:00:53.129669  106287 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:00:53.129727  106287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:00:53.141614  106287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:00:53.153068  106287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:00:53.235919  106287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:00:53.313395  106287 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0912 22:00:53.313426  106287 docker.go:212] disabling docker service ...
	I0912 22:00:53.313462  106287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:00:53.330162  106287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:00:53.340022  106287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:00:53.416514  106287 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0912 22:00:53.416579  106287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:00:53.497870  106287 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0912 22:00:53.497932  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:00:53.507706  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:00:53.521197  106287 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0912 22:00:53.521233  106287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 22:00:53.521279  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:00:53.529618  106287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:00:53.529665  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:00:53.537902  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:00:53.546094  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:00:53.554166  106287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:00:53.561910  106287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:00:53.568967  106287 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 22:00:53.569025  106287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:00:53.575926  106287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:00:53.650921  106287 ssh_runner.go:195] Run: sudo systemctl restart crio
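Editorial note (not part of the captured log): the sed edits a few lines above rewrite single fields in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted here. A quick way to confirm on the node what they left behind is sketched below; the expected values are inferred from the sed expressions themselves, not from captured output.

    # hypothetical verification step; the file path is the one shown in the sed commands above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, going by the sed expressions:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"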
	I0912 22:00:53.745195  106287 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:00:53.745246  106287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:00:53.748385  106287 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0912 22:00:53.748403  106287 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 22:00:53.748410  106287 command_runner.go:130] > Device: 40h/64d	Inode: 190         Links: 1
	I0912 22:00:53.748417  106287 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:00:53.748424  106287 command_runner.go:130] > Access: 2023-09-12 22:00:53.731723502 +0000
	I0912 22:00:53.748429  106287 command_runner.go:130] > Modify: 2023-09-12 22:00:53.731723502 +0000
	I0912 22:00:53.748434  106287 command_runner.go:130] > Change: 2023-09-12 22:00:53.731723502 +0000
	I0912 22:00:53.748440  106287 command_runner.go:130] >  Birth: -
	I0912 22:00:53.748459  106287 start.go:537] Will wait 60s for crictl version
	I0912 22:00:53.748494  106287 ssh_runner.go:195] Run: which crictl
	I0912 22:00:53.751241  106287 command_runner.go:130] > /usr/bin/crictl
	I0912 22:00:53.751340  106287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:00:53.780124  106287 command_runner.go:130] > Version:  0.1.0
	I0912 22:00:53.780145  106287 command_runner.go:130] > RuntimeName:  cri-o
	I0912 22:00:53.780153  106287 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0912 22:00:53.780161  106287 command_runner.go:130] > RuntimeApiVersion:  v1
	I0912 22:00:53.782290  106287 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 22:00:53.782375  106287 ssh_runner.go:195] Run: crio --version
	I0912 22:00:53.813196  106287 command_runner.go:130] > crio version 1.24.6
	I0912 22:00:53.813222  106287 command_runner.go:130] > Version:          1.24.6
	I0912 22:00:53.813233  106287 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0912 22:00:53.813241  106287 command_runner.go:130] > GitTreeState:     clean
	I0912 22:00:53.813247  106287 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0912 22:00:53.813253  106287 command_runner.go:130] > GoVersion:        go1.18.2
	I0912 22:00:53.813256  106287 command_runner.go:130] > Compiler:         gc
	I0912 22:00:53.813261  106287 command_runner.go:130] > Platform:         linux/amd64
	I0912 22:00:53.813269  106287 command_runner.go:130] > Linkmode:         dynamic
	I0912 22:00:53.813284  106287 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0912 22:00:53.813295  106287 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:00:53.813306  106287 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:00:53.814567  106287 ssh_runner.go:195] Run: crio --version
	I0912 22:00:53.846265  106287 command_runner.go:130] > crio version 1.24.6
	I0912 22:00:53.846284  106287 command_runner.go:130] > Version:          1.24.6
	I0912 22:00:53.846291  106287 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0912 22:00:53.846296  106287 command_runner.go:130] > GitTreeState:     clean
	I0912 22:00:53.846301  106287 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0912 22:00:53.846306  106287 command_runner.go:130] > GoVersion:        go1.18.2
	I0912 22:00:53.846310  106287 command_runner.go:130] > Compiler:         gc
	I0912 22:00:53.846314  106287 command_runner.go:130] > Platform:         linux/amd64
	I0912 22:00:53.846321  106287 command_runner.go:130] > Linkmode:         dynamic
	I0912 22:00:53.846328  106287 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0912 22:00:53.846336  106287 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:00:53.846340  106287 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:00:53.848160  106287 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 22:00:53.849545  106287 cli_runner.go:164] Run: docker network inspect multinode-947523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:00:53.864988  106287 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0912 22:00:53.868269  106287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
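Editorial note (not part of the captured log): the bash one-liner above rebuilds /etc/hosts so that host.minikube.internal resolves to the gateway of the freshly created 192.168.58.0/24 network. The same commands, unrolled for readability ($$ is the remote shell's PID and the separator is a literal tab):

    { grep -v $'\thost.minikube.internal$' /etc/hosts    # keep everything except a stale entry
      echo $'192.168.58.1\thost.minikube.internal'       # append the gateway as the new entry
    } > /tmp/h.$$                                        # collect both into a temp file
    sudo cp /tmp/h.$$ /etc/hosts                         # copy it back into place as root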
	I0912 22:00:53.877673  106287 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:00:53.877742  106287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:00:53.924243  106287 command_runner.go:130] > {
	I0912 22:00:53.924268  106287 command_runner.go:130] >   "images": [
	I0912 22:00:53.924275  106287 command_runner.go:130] >     {
	I0912 22:00:53.924283  106287 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0912 22:00:53.924289  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.924294  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0912 22:00:53.924298  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924302  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.924313  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0912 22:00:53.924325  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0912 22:00:53.924332  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924340  106287 command_runner.go:130] >       "size": "65249302",
	I0912 22:00:53.924351  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.924362  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.924371  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.924378  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.924382  106287 command_runner.go:130] >     },
	I0912 22:00:53.924388  106287 command_runner.go:130] >     {
	I0912 22:00:53.924394  106287 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:00:53.924401  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.924406  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:00:53.924415  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924422  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.924439  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:00:53.924456  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:00:53.924465  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924477  106287 command_runner.go:130] >       "size": "31470524",
	I0912 22:00:53.924484  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.924489  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.924495  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.924501  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.924511  106287 command_runner.go:130] >     },
	I0912 22:00:53.924520  106287 command_runner.go:130] >     {
	I0912 22:00:53.924534  106287 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0912 22:00:53.924544  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.924556  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0912 22:00:53.924566  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924575  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.924585  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0912 22:00:53.924614  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0912 22:00:53.924624  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924632  106287 command_runner.go:130] >       "size": "53621675",
	I0912 22:00:53.924642  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.924649  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.924658  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.924667  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.924673  106287 command_runner.go:130] >     },
	I0912 22:00:53.924678  106287 command_runner.go:130] >     {
	I0912 22:00:53.924692  106287 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0912 22:00:53.924703  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.924711  106287 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0912 22:00:53.924720  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924731  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.924745  106287 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0912 22:00:53.924757  106287 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0912 22:00:53.924780  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924791  106287 command_runner.go:130] >       "size": "295456551",
	I0912 22:00:53.924803  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.924830  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.924839  106287 command_runner.go:130] >       },
	I0912 22:00:53.924846  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.924853  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.924864  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.924874  106287 command_runner.go:130] >     },
	I0912 22:00:53.924884  106287 command_runner.go:130] >     {
	I0912 22:00:53.924898  106287 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0912 22:00:53.924908  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.924920  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0912 22:00:53.924927  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924932  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.924946  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0912 22:00:53.924963  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0912 22:00:53.924973  106287 command_runner.go:130] >       ],
	I0912 22:00:53.924983  106287 command_runner.go:130] >       "size": "126972880",
	I0912 22:00:53.924993  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.925001  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.925010  106287 command_runner.go:130] >       },
	I0912 22:00:53.925018  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.925023  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.925033  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.925044  106287 command_runner.go:130] >     },
	I0912 22:00:53.925054  106287 command_runner.go:130] >     {
	I0912 22:00:53.925067  106287 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0912 22:00:53.925077  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.925090  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0912 22:00:53.925097  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925102  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.925116  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0912 22:00:53.925133  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0912 22:00:53.925142  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925153  106287 command_runner.go:130] >       "size": "123163446",
	I0912 22:00:53.925162  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.925177  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.925185  106287 command_runner.go:130] >       },
	I0912 22:00:53.925190  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.925200  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.925210  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.925220  106287 command_runner.go:130] >     },
	I0912 22:00:53.925229  106287 command_runner.go:130] >     {
	I0912 22:00:53.925243  106287 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0912 22:00:53.925253  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.925264  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0912 22:00:53.925272  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925277  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.925291  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0912 22:00:53.925307  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0912 22:00:53.925317  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925327  106287 command_runner.go:130] >       "size": "74680215",
	I0912 22:00:53.925337  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.925347  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.925356  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.925363  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.925368  106287 command_runner.go:130] >     },
	I0912 22:00:53.925378  106287 command_runner.go:130] >     {
	I0912 22:00:53.925392  106287 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0912 22:00:53.925403  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.925415  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0912 22:00:53.925424  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925434  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.925492  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0912 22:00:53.925510  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0912 22:00:53.925515  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925521  106287 command_runner.go:130] >       "size": "61477686",
	I0912 22:00:53.925527  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.925535  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.925541  106287 command_runner.go:130] >       },
	I0912 22:00:53.925549  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.925555  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.925562  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.925568  106287 command_runner.go:130] >     },
	I0912 22:00:53.925577  106287 command_runner.go:130] >     {
	I0912 22:00:53.925587  106287 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0912 22:00:53.925597  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.925607  106287 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0912 22:00:53.925617  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925623  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.925637  106287 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0912 22:00:53.925651  106287 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0912 22:00:53.925660  106287 command_runner.go:130] >       ],
	I0912 22:00:53.925666  106287 command_runner.go:130] >       "size": "750414",
	I0912 22:00:53.925676  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.925688  106287 command_runner.go:130] >         "value": "65535"
	I0912 22:00:53.925699  106287 command_runner.go:130] >       },
	I0912 22:00:53.925708  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.925718  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.925729  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.925739  106287 command_runner.go:130] >     }
	I0912 22:00:53.925748  106287 command_runner.go:130] >   ]
	I0912 22:00:53.925757  106287 command_runner.go:130] > }
	I0912 22:00:53.926587  106287 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:00:53.926607  106287 crio.go:415] Images already preloaded, skipping extraction
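	The JSON above is the raw output of "sudo crictl images --output json": one entry per image carrying its id, repoTags, repoDigests, size, uid, username and pinned flag, which is what the preload check walks before concluding that nothing needs to be extracted. A minimal sketch for listing just the repo tags from that same output, assuming jq is installed on the node:

		sudo crictl images --output json | jq -r '.images[].repoTags[]'

	The same pipeline can report digests or sizes by swapping the jq path (for example '.images[].size').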
	I0912 22:00:53.926648  106287 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:00:53.954985  106287 command_runner.go:130] > {
	I0912 22:00:53.955008  106287 command_runner.go:130] >   "images": [
	I0912 22:00:53.955015  106287 command_runner.go:130] >     {
	I0912 22:00:53.955024  106287 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0912 22:00:53.955030  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955035  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0912 22:00:53.955039  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955043  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955053  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0912 22:00:53.955078  106287 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0912 22:00:53.955089  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955099  106287 command_runner.go:130] >       "size": "65249302",
	I0912 22:00:53.955109  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.955119  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955128  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955138  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955148  106287 command_runner.go:130] >     },
	I0912 22:00:53.955157  106287 command_runner.go:130] >     {
	I0912 22:00:53.955170  106287 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0912 22:00:53.955180  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955189  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0912 22:00:53.955196  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955202  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955212  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0912 22:00:53.955225  106287 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0912 22:00:53.955232  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955243  106287 command_runner.go:130] >       "size": "31470524",
	I0912 22:00:53.955253  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.955264  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955273  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955283  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955290  106287 command_runner.go:130] >     },
	I0912 22:00:53.955294  106287 command_runner.go:130] >     {
	I0912 22:00:53.955307  106287 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0912 22:00:53.955318  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955328  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0912 22:00:53.955337  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955347  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955363  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0912 22:00:53.955379  106287 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0912 22:00:53.955385  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955391  106287 command_runner.go:130] >       "size": "53621675",
	I0912 22:00:53.955397  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.955409  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955416  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955426  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955435  106287 command_runner.go:130] >     },
	I0912 22:00:53.955442  106287 command_runner.go:130] >     {
	I0912 22:00:53.955455  106287 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0912 22:00:53.955465  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955475  106287 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0912 22:00:53.955481  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955488  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955504  106287 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0912 22:00:53.955519  106287 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0912 22:00:53.955537  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955548  106287 command_runner.go:130] >       "size": "295456551",
	I0912 22:00:53.955557  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.955565  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.955571  106287 command_runner.go:130] >       },
	I0912 22:00:53.955582  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955593  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955600  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955610  106287 command_runner.go:130] >     },
	I0912 22:00:53.955620  106287 command_runner.go:130] >     {
	I0912 22:00:53.955633  106287 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0912 22:00:53.955643  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955659  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0912 22:00:53.955666  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955673  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955685  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0912 22:00:53.955701  106287 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0912 22:00:53.955711  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955722  106287 command_runner.go:130] >       "size": "126972880",
	I0912 22:00:53.955732  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.955742  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.955751  106287 command_runner.go:130] >       },
	I0912 22:00:53.955761  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955768  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955774  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955782  106287 command_runner.go:130] >     },
	I0912 22:00:53.955792  106287 command_runner.go:130] >     {
	I0912 22:00:53.955803  106287 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0912 22:00:53.955814  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955824  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0912 22:00:53.955833  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955840  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.955855  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0912 22:00:53.955871  106287 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0912 22:00:53.955882  106287 command_runner.go:130] >       ],
	I0912 22:00:53.955890  106287 command_runner.go:130] >       "size": "123163446",
	I0912 22:00:53.955900  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.955907  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.955916  106287 command_runner.go:130] >       },
	I0912 22:00:53.955924  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.955935  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.955942  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.955950  106287 command_runner.go:130] >     },
	I0912 22:00:53.955957  106287 command_runner.go:130] >     {
	I0912 22:00:53.955966  106287 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0912 22:00:53.955977  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.955986  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0912 22:00:53.955995  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956003  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.956020  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0912 22:00:53.956036  106287 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0912 22:00:53.956044  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956049  106287 command_runner.go:130] >       "size": "74680215",
	I0912 22:00:53.956053  106287 command_runner.go:130] >       "uid": null,
	I0912 22:00:53.956064  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.956070  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.956081  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.956087  106287 command_runner.go:130] >     },
	I0912 22:00:53.956097  106287 command_runner.go:130] >     {
	I0912 22:00:53.956108  106287 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0912 22:00:53.956118  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.956127  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0912 22:00:53.956135  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956139  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.956195  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0912 22:00:53.956213  106287 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0912 22:00:53.956220  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956227  106287 command_runner.go:130] >       "size": "61477686",
	I0912 22:00:53.956237  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.956244  106287 command_runner.go:130] >         "value": "0"
	I0912 22:00:53.956253  106287 command_runner.go:130] >       },
	I0912 22:00:53.956260  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.956270  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.956277  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.956287  106287 command_runner.go:130] >     },
	I0912 22:00:53.956293  106287 command_runner.go:130] >     {
	I0912 22:00:53.956307  106287 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0912 22:00:53.956317  106287 command_runner.go:130] >       "repoTags": [
	I0912 22:00:53.956326  106287 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0912 22:00:53.956334  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956340  106287 command_runner.go:130] >       "repoDigests": [
	I0912 22:00:53.956351  106287 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0912 22:00:53.956361  106287 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0912 22:00:53.956365  106287 command_runner.go:130] >       ],
	I0912 22:00:53.956371  106287 command_runner.go:130] >       "size": "750414",
	I0912 22:00:53.956375  106287 command_runner.go:130] >       "uid": {
	I0912 22:00:53.956384  106287 command_runner.go:130] >         "value": "65535"
	I0912 22:00:53.956387  106287 command_runner.go:130] >       },
	I0912 22:00:53.956392  106287 command_runner.go:130] >       "username": "",
	I0912 22:00:53.956397  106287 command_runner.go:130] >       "spec": null,
	I0912 22:00:53.956401  106287 command_runner.go:130] >       "pinned": false
	I0912 22:00:53.956408  106287 command_runner.go:130] >     }
	I0912 22:00:53.956412  106287 command_runner.go:130] >   ]
	I0912 22:00:53.956416  106287 command_runner.go:130] > }
	I0912 22:00:53.956998  106287 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:00:53.957015  106287 cache_images.go:84] Images are preloaded, skipping loading
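	With both checks reporting the full image set as preloaded, loading is skipped and the next step renders the effective runtime configuration with "crio config": defaults are printed as commented-out lines, while the handful of values minikube overrides (conmon_cgroup, cgroup_manager, pause_image and the [crio.runtime.runtimes.runc] block) appear uncommented. A minimal sketch for showing only the overridden settings, assuming shell access to the node (for example via "minikube ssh"):

		sudo crio config 2>/dev/null | grep -vE '^[[:space:]]*(#|$)'

	The 2>/dev/null drops the "Starting CRI-O ..." info lines, which the log above shows being written to stderr.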
	I0912 22:00:53.957068  106287 ssh_runner.go:195] Run: crio config
	I0912 22:00:53.991402  106287 command_runner.go:130] ! time="2023-09-12 22:00:53.991023002Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0912 22:00:53.991435  106287 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0912 22:00:53.996090  106287 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0912 22:00:53.996117  106287 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0912 22:00:53.996124  106287 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0912 22:00:53.996129  106287 command_runner.go:130] > #
	I0912 22:00:53.996136  106287 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0912 22:00:53.996145  106287 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0912 22:00:53.996152  106287 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0912 22:00:53.996161  106287 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0912 22:00:53.996165  106287 command_runner.go:130] > # reload'.
	I0912 22:00:53.996172  106287 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0912 22:00:53.996180  106287 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0912 22:00:53.996189  106287 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0912 22:00:53.996195  106287 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0912 22:00:53.996201  106287 command_runner.go:130] > [crio]
	I0912 22:00:53.996212  106287 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0912 22:00:53.996225  106287 command_runner.go:130] > # containers images, in this directory.
	I0912 22:00:53.996240  106287 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0912 22:00:53.996250  106287 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0912 22:00:53.996258  106287 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0912 22:00:53.996266  106287 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0912 22:00:53.996275  106287 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0912 22:00:53.996282  106287 command_runner.go:130] > # storage_driver = "vfs"
	I0912 22:00:53.996290  106287 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0912 22:00:53.996298  106287 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0912 22:00:53.996304  106287 command_runner.go:130] > # storage_option = [
	I0912 22:00:53.996308  106287 command_runner.go:130] > # ]
	I0912 22:00:53.996316  106287 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0912 22:00:53.996325  106287 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0912 22:00:53.996332  106287 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0912 22:00:53.996338  106287 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0912 22:00:53.996346  106287 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0912 22:00:53.996353  106287 command_runner.go:130] > # always happen on a node reboot
	I0912 22:00:53.996358  106287 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0912 22:00:53.996365  106287 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0912 22:00:53.996373  106287 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0912 22:00:53.996384  106287 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0912 22:00:53.996392  106287 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0912 22:00:53.996402  106287 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0912 22:00:53.996412  106287 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0912 22:00:53.996419  106287 command_runner.go:130] > # internal_wipe = true
	I0912 22:00:53.996424  106287 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0912 22:00:53.996432  106287 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0912 22:00:53.996439  106287 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0912 22:00:53.996446  106287 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0912 22:00:53.996454  106287 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0912 22:00:53.996461  106287 command_runner.go:130] > [crio.api]
	I0912 22:00:53.996466  106287 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0912 22:00:53.996473  106287 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0912 22:00:53.996478  106287 command_runner.go:130] > # IP address on which the stream server will listen.
	I0912 22:00:53.996484  106287 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0912 22:00:53.996491  106287 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0912 22:00:53.996498  106287 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0912 22:00:53.996506  106287 command_runner.go:130] > # stream_port = "0"
	I0912 22:00:53.996511  106287 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0912 22:00:53.996517  106287 command_runner.go:130] > # stream_enable_tls = false
	I0912 22:00:53.996523  106287 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0912 22:00:53.996530  106287 command_runner.go:130] > # stream_idle_timeout = ""
	I0912 22:00:53.996536  106287 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0912 22:00:53.996544  106287 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0912 22:00:53.996550  106287 command_runner.go:130] > # minutes.
	I0912 22:00:53.996555  106287 command_runner.go:130] > # stream_tls_cert = ""
	I0912 22:00:53.996563  106287 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0912 22:00:53.996571  106287 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0912 22:00:53.996578  106287 command_runner.go:130] > # stream_tls_key = ""
	I0912 22:00:53.996584  106287 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0912 22:00:53.996603  106287 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0912 22:00:53.996613  106287 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0912 22:00:53.996621  106287 command_runner.go:130] > # stream_tls_ca = ""
	I0912 22:00:53.996628  106287 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0912 22:00:53.996635  106287 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0912 22:00:53.996643  106287 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0912 22:00:53.996649  106287 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0912 22:00:53.996674  106287 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0912 22:00:53.996682  106287 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0912 22:00:53.996686  106287 command_runner.go:130] > [crio.runtime]
	I0912 22:00:53.996692  106287 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0912 22:00:53.996698  106287 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0912 22:00:53.996705  106287 command_runner.go:130] > # "nofile=1024:2048"
	I0912 22:00:53.996711  106287 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0912 22:00:53.996717  106287 command_runner.go:130] > # default_ulimits = [
	I0912 22:00:53.996721  106287 command_runner.go:130] > # ]
	I0912 22:00:53.996729  106287 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0912 22:00:53.996735  106287 command_runner.go:130] > # no_pivot = false
	I0912 22:00:53.996742  106287 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0912 22:00:53.996751  106287 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0912 22:00:53.996758  106287 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0912 22:00:53.996764  106287 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0912 22:00:53.996771  106287 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0912 22:00:53.996780  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:00:53.996787  106287 command_runner.go:130] > # conmon = ""
	I0912 22:00:53.996793  106287 command_runner.go:130] > # Cgroup setting for conmon
	I0912 22:00:53.996802  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0912 22:00:53.996808  106287 command_runner.go:130] > conmon_cgroup = "pod"
	I0912 22:00:53.996814  106287 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0912 22:00:53.996821  106287 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0912 22:00:53.996827  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:00:53.996833  106287 command_runner.go:130] > # conmon_env = [
	I0912 22:00:53.996837  106287 command_runner.go:130] > # ]
	I0912 22:00:53.996845  106287 command_runner.go:130] > # Additional environment variables to set for all the
	I0912 22:00:53.996850  106287 command_runner.go:130] > # containers. These are overridden if set in the
	I0912 22:00:53.996858  106287 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0912 22:00:53.996864  106287 command_runner.go:130] > # default_env = [
	I0912 22:00:53.996868  106287 command_runner.go:130] > # ]
	I0912 22:00:53.996874  106287 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0912 22:00:53.996880  106287 command_runner.go:130] > # selinux = false
	I0912 22:00:53.996887  106287 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0912 22:00:53.996895  106287 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0912 22:00:53.996903  106287 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0912 22:00:53.996910  106287 command_runner.go:130] > # seccomp_profile = ""
	I0912 22:00:53.996916  106287 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0912 22:00:53.996926  106287 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0912 22:00:53.996935  106287 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0912 22:00:53.996942  106287 command_runner.go:130] > # which might increase security.
	I0912 22:00:53.996946  106287 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0912 22:00:53.996955  106287 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0912 22:00:53.996961  106287 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0912 22:00:53.996969  106287 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0912 22:00:53.996978  106287 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0912 22:00:53.996985  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:00:53.996989  106287 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0912 22:00:53.996997  106287 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0912 22:00:53.997004  106287 command_runner.go:130] > # the cgroup blockio controller.
	I0912 22:00:53.997008  106287 command_runner.go:130] > # blockio_config_file = ""
	I0912 22:00:53.997016  106287 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0912 22:00:53.997021  106287 command_runner.go:130] > # irqbalance daemon.
	I0912 22:00:53.997026  106287 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0912 22:00:53.997035  106287 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0912 22:00:53.997042  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:00:53.997046  106287 command_runner.go:130] > # rdt_config_file = ""
	I0912 22:00:53.997054  106287 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0912 22:00:53.997061  106287 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0912 22:00:53.997067  106287 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0912 22:00:53.997073  106287 command_runner.go:130] > # separate_pull_cgroup = ""
	I0912 22:00:53.997080  106287 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0912 22:00:53.997090  106287 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0912 22:00:53.997097  106287 command_runner.go:130] > # will be added.
	I0912 22:00:53.997101  106287 command_runner.go:130] > # default_capabilities = [
	I0912 22:00:53.997107  106287 command_runner.go:130] > # 	"CHOWN",
	I0912 22:00:53.997111  106287 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0912 22:00:53.997118  106287 command_runner.go:130] > # 	"FSETID",
	I0912 22:00:53.997121  106287 command_runner.go:130] > # 	"FOWNER",
	I0912 22:00:53.997128  106287 command_runner.go:130] > # 	"SETGID",
	I0912 22:00:53.997131  106287 command_runner.go:130] > # 	"SETUID",
	I0912 22:00:53.997137  106287 command_runner.go:130] > # 	"SETPCAP",
	I0912 22:00:53.997142  106287 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0912 22:00:53.997148  106287 command_runner.go:130] > # 	"KILL",
	I0912 22:00:53.997151  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997162  106287 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0912 22:00:53.997171  106287 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0912 22:00:53.997177  106287 command_runner.go:130] > # add_inheritable_capabilities = true
	I0912 22:00:53.997183  106287 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0912 22:00:53.997191  106287 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:00:53.997197  106287 command_runner.go:130] > # default_sysctls = [
	I0912 22:00:53.997201  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997208  106287 command_runner.go:130] > # List of devices on the host that a
	I0912 22:00:53.997214  106287 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0912 22:00:53.997221  106287 command_runner.go:130] > # allowed_devices = [
	I0912 22:00:53.997225  106287 command_runner.go:130] > # 	"/dev/fuse",
	I0912 22:00:53.997231  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997236  106287 command_runner.go:130] > # List of additional devices. specified as
	I0912 22:00:53.997258  106287 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0912 22:00:53.997265  106287 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0912 22:00:53.997274  106287 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:00:53.997281  106287 command_runner.go:130] > # additional_devices = [
	I0912 22:00:53.997284  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997291  106287 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0912 22:00:53.997295  106287 command_runner.go:130] > # cdi_spec_dirs = [
	I0912 22:00:53.997302  106287 command_runner.go:130] > # 	"/etc/cdi",
	I0912 22:00:53.997306  106287 command_runner.go:130] > # 	"/var/run/cdi",
	I0912 22:00:53.997312  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997319  106287 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0912 22:00:53.997327  106287 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0912 22:00:53.997333  106287 command_runner.go:130] > # Defaults to false.
	I0912 22:00:53.997338  106287 command_runner.go:130] > # device_ownership_from_security_context = false
	I0912 22:00:53.997347  106287 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0912 22:00:53.997354  106287 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0912 22:00:53.997361  106287 command_runner.go:130] > # hooks_dir = [
	I0912 22:00:53.997366  106287 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0912 22:00:53.997372  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997378  106287 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0912 22:00:53.997387  106287 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0912 22:00:53.997392  106287 command_runner.go:130] > # its default mounts from the following two files:
	I0912 22:00:53.997397  106287 command_runner.go:130] > #
	I0912 22:00:53.997404  106287 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0912 22:00:53.997412  106287 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0912 22:00:53.997421  106287 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0912 22:00:53.997426  106287 command_runner.go:130] > #
	I0912 22:00:53.997432  106287 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0912 22:00:53.997441  106287 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0912 22:00:53.997450  106287 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0912 22:00:53.997457  106287 command_runner.go:130] > #      only add mounts it finds in this file.
	I0912 22:00:53.997463  106287 command_runner.go:130] > #
	I0912 22:00:53.997468  106287 command_runner.go:130] > # default_mounts_file = ""
	I0912 22:00:53.997475  106287 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0912 22:00:53.997481  106287 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0912 22:00:53.997488  106287 command_runner.go:130] > # pids_limit = 0
	I0912 22:00:53.997494  106287 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0912 22:00:53.997502  106287 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0912 22:00:53.997510  106287 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0912 22:00:53.997520  106287 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0912 22:00:53.997526  106287 command_runner.go:130] > # log_size_max = -1
	I0912 22:00:53.997533  106287 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0912 22:00:53.997540  106287 command_runner.go:130] > # log_to_journald = false
	I0912 22:00:53.997546  106287 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0912 22:00:53.997553  106287 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0912 22:00:53.997558  106287 command_runner.go:130] > # Path to directory for container attach sockets.
	I0912 22:00:53.997567  106287 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0912 22:00:53.997573  106287 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0912 22:00:53.997579  106287 command_runner.go:130] > # bind_mount_prefix = ""
	I0912 22:00:53.997584  106287 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0912 22:00:53.997591  106287 command_runner.go:130] > # read_only = false
	I0912 22:00:53.997597  106287 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0912 22:00:53.997605  106287 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0912 22:00:53.997613  106287 command_runner.go:130] > # live configuration reload.
	I0912 22:00:53.997617  106287 command_runner.go:130] > # log_level = "info"
	I0912 22:00:53.997624  106287 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0912 22:00:53.997632  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:00:53.997636  106287 command_runner.go:130] > # log_filter = ""
	I0912 22:00:53.997645  106287 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0912 22:00:53.997654  106287 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0912 22:00:53.997664  106287 command_runner.go:130] > # separated by comma.
	I0912 22:00:53.997670  106287 command_runner.go:130] > # uid_mappings = ""
	I0912 22:00:53.997676  106287 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0912 22:00:53.997682  106287 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0912 22:00:53.997688  106287 command_runner.go:130] > # separated by comma.
	I0912 22:00:53.997693  106287 command_runner.go:130] > # gid_mappings = ""
	I0912 22:00:53.997701  106287 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0912 22:00:53.997709  106287 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:00:53.997718  106287 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:00:53.997724  106287 command_runner.go:130] > # minimum_mappable_uid = -1
	I0912 22:00:53.997731  106287 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0912 22:00:53.997739  106287 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:00:53.997747  106287 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:00:53.997753  106287 command_runner.go:130] > # minimum_mappable_gid = -1
	I0912 22:00:53.997759  106287 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0912 22:00:53.997768  106287 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0912 22:00:53.997776  106287 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0912 22:00:53.997781  106287 command_runner.go:130] > # ctr_stop_timeout = 30
	I0912 22:00:53.997789  106287 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0912 22:00:53.997798  106287 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0912 22:00:53.997805  106287 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0912 22:00:53.997813  106287 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0912 22:00:53.997818  106287 command_runner.go:130] > # drop_infra_ctr = true
	I0912 22:00:53.997824  106287 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0912 22:00:53.997832  106287 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0912 22:00:53.997840  106287 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0912 22:00:53.997846  106287 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0912 22:00:53.997852  106287 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0912 22:00:53.997859  106287 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0912 22:00:53.997866  106287 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0912 22:00:53.997873  106287 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0912 22:00:53.997879  106287 command_runner.go:130] > # pinns_path = ""
	I0912 22:00:53.997885  106287 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:00:53.997894  106287 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0912 22:00:53.997902  106287 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0912 22:00:53.997909  106287 command_runner.go:130] > # default_runtime = "runc"
	I0912 22:00:53.997917  106287 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0912 22:00:53.997924  106287 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0912 22:00:53.997935  106287 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0912 22:00:53.997942  106287 command_runner.go:130] > # creation as a file is not desired either.
	I0912 22:00:53.997950  106287 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0912 22:00:53.997958  106287 command_runner.go:130] > # the hostname is being managed dynamically.
	I0912 22:00:53.997962  106287 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0912 22:00:53.997968  106287 command_runner.go:130] > # ]
	I0912 22:00:53.997975  106287 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0912 22:00:53.997983  106287 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0912 22:00:53.997992  106287 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0912 22:00:53.998000  106287 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0912 22:00:53.998005  106287 command_runner.go:130] > #
	I0912 22:00:53.998010  106287 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0912 22:00:53.998017  106287 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0912 22:00:53.998021  106287 command_runner.go:130] > #  runtime_type = "oci"
	I0912 22:00:53.998028  106287 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0912 22:00:53.998033  106287 command_runner.go:130] > #  privileged_without_host_devices = false
	I0912 22:00:53.998040  106287 command_runner.go:130] > #  allowed_annotations = []
	I0912 22:00:53.998043  106287 command_runner.go:130] > # Where:
	I0912 22:00:53.998051  106287 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0912 22:00:53.998057  106287 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0912 22:00:53.998066  106287 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0912 22:00:53.998074  106287 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0912 22:00:53.998078  106287 command_runner.go:130] > #   in $PATH.
	I0912 22:00:53.998084  106287 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0912 22:00:53.998092  106287 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0912 22:00:53.998098  106287 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0912 22:00:53.998104  106287 command_runner.go:130] > #   state.
	I0912 22:00:53.998110  106287 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0912 22:00:53.998118  106287 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0912 22:00:53.998124  106287 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0912 22:00:53.998132  106287 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0912 22:00:53.998141  106287 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0912 22:00:53.998148  106287 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0912 22:00:53.998156  106287 command_runner.go:130] > #   The currently recognized values are:
	I0912 22:00:53.998163  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0912 22:00:53.998172  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0912 22:00:53.998180  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0912 22:00:53.998189  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0912 22:00:53.998198  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0912 22:00:53.998206  106287 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0912 22:00:53.998215  106287 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0912 22:00:53.998223  106287 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0912 22:00:53.998231  106287 command_runner.go:130] > #   should be moved to the container's cgroup
	I0912 22:00:53.998235  106287 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0912 22:00:53.998242  106287 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0912 22:00:53.998246  106287 command_runner.go:130] > runtime_type = "oci"
	I0912 22:00:53.998253  106287 command_runner.go:130] > runtime_root = "/run/runc"
	I0912 22:00:53.998258  106287 command_runner.go:130] > runtime_config_path = ""
	I0912 22:00:53.998264  106287 command_runner.go:130] > monitor_path = ""
	I0912 22:00:53.998268  106287 command_runner.go:130] > monitor_cgroup = ""
	I0912 22:00:53.998274  106287 command_runner.go:130] > monitor_exec_cgroup = ""
	I0912 22:00:53.998297  106287 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0912 22:00:53.998303  106287 command_runner.go:130] > # running containers
	I0912 22:00:53.998308  106287 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0912 22:00:53.998316  106287 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0912 22:00:53.998326  106287 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0912 22:00:53.998334  106287 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0912 22:00:53.998342  106287 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0912 22:00:53.998347  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0912 22:00:53.998353  106287 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0912 22:00:53.998358  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0912 22:00:53.998365  106287 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0912 22:00:53.998369  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0912 22:00:53.998378  106287 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0912 22:00:53.998385  106287 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0912 22:00:53.998394  106287 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0912 22:00:53.998402  106287 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0912 22:00:53.998409  106287 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0912 22:00:53.998417  106287 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0912 22:00:53.998429  106287 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0912 22:00:53.998439  106287 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0912 22:00:53.998447  106287 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0912 22:00:53.998456  106287 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0912 22:00:53.998462  106287 command_runner.go:130] > # Example:
	I0912 22:00:53.998467  106287 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0912 22:00:53.998474  106287 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0912 22:00:53.998479  106287 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0912 22:00:53.998486  106287 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0912 22:00:53.998490  106287 command_runner.go:130] > # cpuset = 0
	I0912 22:00:53.998497  106287 command_runner.go:130] > # cpushares = "0-1"
	I0912 22:00:53.998501  106287 command_runner.go:130] > # Where:
	I0912 22:00:53.998508  106287 command_runner.go:130] > # The workload name is workload-type.
	I0912 22:00:53.998515  106287 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0912 22:00:53.998523  106287 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0912 22:00:53.998528  106287 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0912 22:00:53.998538  106287 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0912 22:00:53.998546  106287 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0912 22:00:53.998551  106287 command_runner.go:130] > # 
	I0912 22:00:53.998560  106287 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0912 22:00:53.998566  106287 command_runner.go:130] > #
	I0912 22:00:53.998572  106287 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0912 22:00:53.998580  106287 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0912 22:00:53.998587  106287 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0912 22:00:53.998595  106287 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0912 22:00:53.998601  106287 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0912 22:00:53.998607  106287 command_runner.go:130] > [crio.image]
	I0912 22:00:53.998614  106287 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0912 22:00:53.998620  106287 command_runner.go:130] > # default_transport = "docker://"
	I0912 22:00:53.998626  106287 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0912 22:00:53.998635  106287 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:00:53.998641  106287 command_runner.go:130] > # global_auth_file = ""
	I0912 22:00:53.998646  106287 command_runner.go:130] > # The image used to instantiate infra containers.
	I0912 22:00:53.998653  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:00:53.998664  106287 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0912 22:00:53.998672  106287 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0912 22:00:53.998679  106287 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:00:53.998686  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:00:53.998693  106287 command_runner.go:130] > # pause_image_auth_file = ""
	I0912 22:00:53.998702  106287 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0912 22:00:53.998711  106287 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0912 22:00:53.998719  106287 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0912 22:00:53.998727  106287 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0912 22:00:53.998734  106287 command_runner.go:130] > # pause_command = "/pause"
	I0912 22:00:53.998740  106287 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0912 22:00:53.998749  106287 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0912 22:00:53.998757  106287 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0912 22:00:53.998765  106287 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0912 22:00:53.998770  106287 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0912 22:00:53.998777  106287 command_runner.go:130] > # signature_policy = ""
	I0912 22:00:53.998787  106287 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0912 22:00:53.998795  106287 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0912 22:00:53.998802  106287 command_runner.go:130] > # changing them here.
	I0912 22:00:53.998807  106287 command_runner.go:130] > # insecure_registries = [
	I0912 22:00:53.998820  106287 command_runner.go:130] > # ]
	I0912 22:00:53.998829  106287 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0912 22:00:53.998837  106287 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0912 22:00:53.998844  106287 command_runner.go:130] > # image_volumes = "mkdir"
	I0912 22:00:53.998849  106287 command_runner.go:130] > # Temporary directory to use for storing big files
	I0912 22:00:53.998856  106287 command_runner.go:130] > # big_files_temporary_dir = ""
	I0912 22:00:53.998862  106287 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0912 22:00:53.998869  106287 command_runner.go:130] > # CNI plugins.
	I0912 22:00:53.998873  106287 command_runner.go:130] > [crio.network]
	I0912 22:00:53.998881  106287 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0912 22:00:53.998889  106287 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0912 22:00:53.998894  106287 command_runner.go:130] > # cni_default_network = ""
	I0912 22:00:53.998900  106287 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0912 22:00:53.998906  106287 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0912 22:00:53.998912  106287 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0912 22:00:53.998918  106287 command_runner.go:130] > # plugin_dirs = [
	I0912 22:00:53.998922  106287 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0912 22:00:53.998927  106287 command_runner.go:130] > # ]
	I0912 22:00:53.998934  106287 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0912 22:00:53.998941  106287 command_runner.go:130] > [crio.metrics]
	I0912 22:00:53.998946  106287 command_runner.go:130] > # Globally enable or disable metrics support.
	I0912 22:00:53.998954  106287 command_runner.go:130] > # enable_metrics = false
	I0912 22:00:53.998959  106287 command_runner.go:130] > # Specify enabled metrics collectors.
	I0912 22:00:53.998966  106287 command_runner.go:130] > # Per default all metrics are enabled.
	I0912 22:00:53.998973  106287 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0912 22:00:53.998981  106287 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0912 22:00:53.998989  106287 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0912 22:00:53.998995  106287 command_runner.go:130] > # metrics_collectors = [
	I0912 22:00:53.998999  106287 command_runner.go:130] > # 	"operations",
	I0912 22:00:53.999007  106287 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0912 22:00:53.999011  106287 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0912 22:00:53.999018  106287 command_runner.go:130] > # 	"operations_errors",
	I0912 22:00:53.999022  106287 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0912 22:00:53.999028  106287 command_runner.go:130] > # 	"image_pulls_by_name",
	I0912 22:00:53.999033  106287 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0912 22:00:53.999042  106287 command_runner.go:130] > # 	"image_pulls_failures",
	I0912 22:00:53.999050  106287 command_runner.go:130] > # 	"image_pulls_successes",
	I0912 22:00:53.999057  106287 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0912 22:00:53.999061  106287 command_runner.go:130] > # 	"image_layer_reuse",
	I0912 22:00:53.999068  106287 command_runner.go:130] > # 	"containers_oom_total",
	I0912 22:00:53.999072  106287 command_runner.go:130] > # 	"containers_oom",
	I0912 22:00:53.999078  106287 command_runner.go:130] > # 	"processes_defunct",
	I0912 22:00:53.999083  106287 command_runner.go:130] > # 	"operations_total",
	I0912 22:00:53.999089  106287 command_runner.go:130] > # 	"operations_latency_seconds",
	I0912 22:00:53.999094  106287 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0912 22:00:53.999101  106287 command_runner.go:130] > # 	"operations_errors_total",
	I0912 22:00:53.999106  106287 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0912 22:00:53.999110  106287 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0912 22:00:53.999117  106287 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0912 22:00:53.999122  106287 command_runner.go:130] > # 	"image_pulls_success_total",
	I0912 22:00:53.999128  106287 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0912 22:00:53.999133  106287 command_runner.go:130] > # 	"containers_oom_count_total",
	I0912 22:00:53.999139  106287 command_runner.go:130] > # ]
	I0912 22:00:53.999144  106287 command_runner.go:130] > # The port on which the metrics server will listen.
	I0912 22:00:53.999151  106287 command_runner.go:130] > # metrics_port = 9090
	I0912 22:00:53.999157  106287 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0912 22:00:53.999163  106287 command_runner.go:130] > # metrics_socket = ""
	I0912 22:00:53.999168  106287 command_runner.go:130] > # The certificate for the secure metrics server.
	I0912 22:00:53.999176  106287 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0912 22:00:53.999185  106287 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0912 22:00:53.999191  106287 command_runner.go:130] > # certificate on any modification event.
	I0912 22:00:53.999196  106287 command_runner.go:130] > # metrics_cert = ""
	I0912 22:00:53.999206  106287 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0912 22:00:53.999213  106287 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0912 22:00:53.999221  106287 command_runner.go:130] > # metrics_key = ""
	I0912 22:00:53.999226  106287 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0912 22:00:53.999233  106287 command_runner.go:130] > [crio.tracing]
	I0912 22:00:53.999238  106287 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0912 22:00:53.999245  106287 command_runner.go:130] > # enable_tracing = false
	I0912 22:00:53.999250  106287 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0912 22:00:53.999257  106287 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0912 22:00:53.999262  106287 command_runner.go:130] > # Number of samples to collect per million spans.
	I0912 22:00:53.999269  106287 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0912 22:00:53.999275  106287 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0912 22:00:53.999281  106287 command_runner.go:130] > [crio.stats]
	I0912 22:00:53.999286  106287 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0912 22:00:53.999294  106287 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0912 22:00:53.999298  106287 command_runner.go:130] > # stats_collection_period = 0
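	The block above is the effective crio.conf that minikube generated, with most keys left at their commented defaults. As a sketch of how one of them could be overridden and the merged result checked (the drop-in directory and the "crio config" subcommand are standard CRI-O behaviour, assumed rather than taken from this run):

	    # Hypothetical drop-in; CRI-O merges /etc/crio/crio.conf.d/*.conf over the base config.
	    printf '[crio.image]\npause_image = "registry.k8s.io/pause:3.9"\n' \
	      | sudo tee /etc/crio/crio.conf.d/10-pause-image.conf
	    sudo systemctl restart crio
	    # Print the effective configuration and confirm the key took effect.
	    sudo crio config | grep -E '^\[crio\.image\]|pause_image'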
	I0912 22:00:53.999362  106287 cni.go:84] Creating CNI manager for ""
	I0912 22:00:53.999372  106287 cni.go:136] 1 nodes found, recommending kindnet
	I0912 22:00:53.999387  106287 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 22:00:53.999407  106287 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-947523 NodeName:multinode-947523 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:00:53.999541  106287 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-947523"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
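	The InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents above are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines below and promotes to kubeadm.yaml before init. A sketch of how such a file can be sanity-checked with kubeadm itself (standard kubeadm v1.28 subcommands, not part of this run):

	    # Dry-run against the generated config without modifying the node.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	    # List the images this config implies, as a quick version/registry check.
	    sudo kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml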
	
	I0912 22:00:53.999609  106287 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-947523 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
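	The [Service] fragment above is the 10-kubeadm.conf drop-in written below (426 bytes); the empty ExecStart= line clears the unit's default command before the minikube-specific one is set. A sketch for inspecting the merged unit with standard systemd tooling:

	    # Show the base kubelet unit plus every drop-in systemd will merge, in order.
	    systemctl cat kubelet
	    # Confirm the effective ExecStart after the reset in the drop-in.
	    systemctl show kubelet -p ExecStart --no-pager
	    # Pick up edits to the drop-in.
	    sudo systemctl daemon-reload && sudo systemctl restart kubelet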
	I0912 22:00:53.999652  106287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 22:00:54.007706  106287 command_runner.go:130] > kubeadm
	I0912 22:00:54.007723  106287 command_runner.go:130] > kubectl
	I0912 22:00:54.007727  106287 command_runner.go:130] > kubelet
	I0912 22:00:54.007739  106287 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:00:54.007780  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:00:54.015129  106287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (426 bytes)
	I0912 22:00:54.030289  106287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:00:54.045680  106287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0912 22:00:54.060781  106287 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:00:54.063672  106287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
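	The bash one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the pinned 192.168.58.2 mapping via a temporary copy. A quick way to confirm the pin resolves (getent is standard glibc tooling, not part of this run):

	    getent hosts control-plane.minikube.internal   # expect: 192.168.58.2
	    grep control-plane.minikube.internal /etc/hosts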
	I0912 22:00:54.072734  106287 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523 for IP: 192.168.58.2
	I0912 22:00:54.072759  106287 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.072886  106287 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 22:00:54.072921  106287 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 22:00:54.072960  106287 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key
	I0912 22:00:54.072978  106287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt with IP's: []
	I0912 22:00:54.217198  106287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt ...
	I0912 22:00:54.217228  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt: {Name:mkc2b1c9920572a9aa2997b032ceb8175a218290 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.217391  106287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key ...
	I0912 22:00:54.217402  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key: {Name:mk411162d12a59a174e9ed0a2fb39aa95098609f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.217474  106287 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key.cee25041
	I0912 22:00:54.217487  106287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 22:00:54.315173  106287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt.cee25041 ...
	I0912 22:00:54.315203  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt.cee25041: {Name:mkb38b8ac0a81fdf883f18391778a86577ddab4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.315372  106287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key.cee25041 ...
	I0912 22:00:54.315386  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key.cee25041: {Name:mk6af9e6115ed344af16b28ffc72316018b52840 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.315452  106287 certs.go:337] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt
	I0912 22:00:54.315514  106287 certs.go:341] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key
	I0912 22:00:54.315560  106287 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.key
	I0912 22:00:54.315573  106287 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.crt with IP's: []
	I0912 22:00:54.498626  106287 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.crt ...
	I0912 22:00:54.498658  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.crt: {Name:mk1a91f1d6593ef5e18a3149cd8ed8c4ed1d4b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.498811  106287 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.key ...
	I0912 22:00:54.498822  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.key: {Name:mkf2b20edd391ba0010058aedb48c744ca980795 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:00:54.498886  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0912 22:00:54.498903  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0912 22:00:54.498912  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0912 22:00:54.498925  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0912 22:00:54.498937  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:00:54.498949  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:00:54.498962  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:00:54.498974  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:00:54.499024  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 22:00:54.499057  106287 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 22:00:54.499069  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 22:00:54.499092  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:00:54.499115  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:00:54.499137  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 22:00:54.499172  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:00:54.499199  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /usr/share/ca-certificates/226982.pem
	I0912 22:00:54.499213  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:00:54.499273  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem -> /usr/share/ca-certificates/22698.pem
	I0912 22:00:54.499828  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 22:00:54.520540  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 22:00:54.540498  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:00:54.560560  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 22:00:54.581803  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:00:54.602154  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:00:54.622483  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:00:54.642918  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 22:00:54.663077  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 22:00:54.683145  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:00:54.703277  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 22:00:54.723305  106287 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
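	With the certificates copied into /var/lib/minikube/certs, the apiserver certificate's SANs can be checked against the IP list it was generated with above ([192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]); a sketch using openssl, which the log itself invokes next:

	    # Confirm the served SANs match what the signer was asked for.
	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'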
	I0912 22:00:54.738270  106287 ssh_runner.go:195] Run: openssl version
	I0912 22:00:54.742979  106287 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0912 22:00:54.743230  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 22:00:54.751362  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 22:00:54.754318  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:00:54.754337  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:00:54.754365  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 22:00:54.760147  106287 command_runner.go:130] > 3ec20f2e
	I0912 22:00:54.760337  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:00:54.768349  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:00:54.776457  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:00:54.779420  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:00:54.779463  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:00:54.779517  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:00:54.785647  106287 command_runner.go:130] > b5213941
	I0912 22:00:54.785713  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:00:54.793822  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 22:00:54.801638  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 22:00:54.804515  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:00:54.804534  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:00:54.804562  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 22:00:54.810257  106287 command_runner.go:130] > 51391683
	I0912 22:00:54.810438  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
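	The three test/ln plus "openssl x509 -hash" sequences above are the usual c_rehash-style trust setup: each PEM lands under /usr/share/ca-certificates and gets a <subject-hash>.0 symlink in /etc/ssl/certs. The same pattern for an arbitrary certificate (the file name here is hypothetical):

	    CERT=/usr/share/ca-certificates/example.pem    # hypothetical certificate
	    HASH=$(openssl x509 -hash -noout -in "$CERT")  # e.g. 3ec20f2e, b5213941, 51391683 above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"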
	I0912 22:00:54.818342  106287 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 22:00:54.821136  106287 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:00:54.821167  106287 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:00:54.821211  106287 kubeadm.go:404] StartCluster: {Name:multinode-947523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:00:54.821294  106287 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:00:54.821326  106287 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:00:54.853060  106287 cri.go:89] found id: ""
	I0912 22:00:54.853122  106287 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:00:54.860776  106287 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0912 22:00:54.860803  106287 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0912 22:00:54.860814  106287 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0912 22:00:54.860871  106287 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:00:54.868174  106287 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0912 22:00:54.868215  106287 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:00:54.875356  106287 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0912 22:00:54.875378  106287 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0912 22:00:54.875385  106287 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0912 22:00:54.875393  106287 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:00:54.875420  106287 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:00:54.875449  106287 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 22:00:54.917456  106287 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 22:00:54.917478  106287 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0912 22:00:54.917521  106287 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 22:00:54.917527  106287 command_runner.go:130] > [preflight] Running pre-flight checks
	I0912 22:00:54.951564  106287 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:00:54.951591  106287 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:00:54.951655  106287 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 22:00:54.951666  106287 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 22:00:54.951699  106287 kubeadm.go:322] OS: Linux
	I0912 22:00:54.951708  106287 command_runner.go:130] > OS: Linux
	I0912 22:00:54.951762  106287 kubeadm.go:322] CGROUPS_CPU: enabled
	I0912 22:00:54.951773  106287 command_runner.go:130] > CGROUPS_CPU: enabled
	I0912 22:00:54.951842  106287 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0912 22:00:54.951854  106287 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0912 22:00:54.951909  106287 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0912 22:00:54.951920  106287 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0912 22:00:54.951981  106287 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0912 22:00:54.951992  106287 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0912 22:00:54.952063  106287 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0912 22:00:54.952074  106287 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0912 22:00:54.952144  106287 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0912 22:00:54.952184  106287 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0912 22:00:54.952262  106287 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0912 22:00:54.952275  106287 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0912 22:00:54.952354  106287 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0912 22:00:54.952360  106287 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0912 22:00:54.952430  106287 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0912 22:00:54.952440  106287 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0912 22:00:55.012381  106287 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:00:55.012403  106287 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:00:55.012552  106287 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:00:55.012575  106287 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:00:55.012719  106287 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:00:55.012734  106287 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:00:55.200265  106287 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:00:55.202326  106287 out.go:204]   - Generating certificates and keys ...
	I0912 22:00:55.200301  106287 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:00:55.202489  106287 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 22:00:55.202506  106287 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0912 22:00:55.202594  106287 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 22:00:55.202609  106287 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0912 22:00:55.526887  106287 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:00:55.526919  106287 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:00:55.588794  106287 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:00:55.588821  106287 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:00:55.653103  106287 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:00:55.653133  106287 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0912 22:00:55.752799  106287 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 22:00:55.752824  106287 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0912 22:00:56.077649  106287 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 22:00:56.077673  106287 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0912 22:00:56.077809  106287 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-947523] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0912 22:00:56.077836  106287 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-947523] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0912 22:00:56.448949  106287 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 22:00:56.448996  106287 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0912 22:00:56.449239  106287 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-947523] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0912 22:00:56.449263  106287 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-947523] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0912 22:00:56.662729  106287 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:00:56.662754  106287 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:00:56.797427  106287 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:00:56.797463  106287 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:00:56.869243  106287 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 22:00:56.869284  106287 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0912 22:00:56.869398  106287 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:00:56.869417  106287 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:00:57.061474  106287 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:00:57.061512  106287 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:00:57.186394  106287 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:00:57.186416  106287 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:00:57.321910  106287 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:00:57.321940  106287 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:00:57.413538  106287 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:00:57.413562  106287 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:00:57.414043  106287 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:00:57.414063  106287 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:00:57.416959  106287 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:00:57.420069  106287 out.go:204]   - Booting up control plane ...
	I0912 22:00:57.417031  106287 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:00:57.420172  106287 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:00:57.420215  106287 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:00:57.420328  106287 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:00:57.420339  106287 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:00:57.420408  106287 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:00:57.420423  106287 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:00:57.427842  106287 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:00:57.427870  106287 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:00:57.428675  106287 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:00:57.428694  106287 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:00:57.428760  106287 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 22:00:57.428780  106287 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0912 22:00:57.501988  106287 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:00:57.502017  106287 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:01:02.504094  106287 kubeadm.go:322] [apiclient] All control plane components are healthy after 5.002119 seconds
	I0912 22:01:02.504120  106287 command_runner.go:130] > [apiclient] All control plane components are healthy after 5.002119 seconds
	I0912 22:01:02.504230  106287 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 22:01:02.504242  106287 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0912 22:01:02.515380  106287 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 22:01:02.515405  106287 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0912 22:01:03.034416  106287 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0912 22:01:03.034427  106287 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0912 22:01:03.034687  106287 kubeadm.go:322] [mark-control-plane] Marking the node multinode-947523 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 22:01:03.034702  106287 command_runner.go:130] > [mark-control-plane] Marking the node multinode-947523 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0912 22:01:03.543843  106287 kubeadm.go:322] [bootstrap-token] Using token: nh7k0p.oi82gulw1fgjaqlg
	I0912 22:01:03.545489  106287 out.go:204]   - Configuring RBAC rules ...
	I0912 22:01:03.543934  106287 command_runner.go:130] > [bootstrap-token] Using token: nh7k0p.oi82gulw1fgjaqlg
	I0912 22:01:03.545661  106287 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 22:01:03.545678  106287 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0912 22:01:03.549740  106287 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 22:01:03.549763  106287 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0912 22:01:03.557368  106287 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 22:01:03.557391  106287 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0912 22:01:03.560107  106287 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 22:01:03.560129  106287 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0912 22:01:03.562860  106287 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 22:01:03.562882  106287 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0912 22:01:03.565721  106287 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 22:01:03.565740  106287 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0912 22:01:03.578071  106287 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 22:01:03.578092  106287 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0912 22:01:03.789899  106287 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0912 22:01:03.789935  106287 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0912 22:01:03.954213  106287 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0912 22:01:03.954242  106287 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0912 22:01:03.955060  106287 kubeadm.go:322] 
	I0912 22:01:03.955137  106287 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0912 22:01:03.955168  106287 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0912 22:01:03.955194  106287 kubeadm.go:322] 
	I0912 22:01:03.955301  106287 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0912 22:01:03.955311  106287 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0912 22:01:03.955321  106287 kubeadm.go:322] 
	I0912 22:01:03.955358  106287 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0912 22:01:03.955366  106287 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0912 22:01:03.955441  106287 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 22:01:03.955455  106287 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0912 22:01:03.955519  106287 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 22:01:03.955529  106287 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0912 22:01:03.955535  106287 kubeadm.go:322] 
	I0912 22:01:03.955598  106287 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0912 22:01:03.955611  106287 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0912 22:01:03.955617  106287 kubeadm.go:322] 
	I0912 22:01:03.955698  106287 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 22:01:03.955707  106287 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0912 22:01:03.955711  106287 kubeadm.go:322] 
	I0912 22:01:03.955776  106287 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0912 22:01:03.955786  106287 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0912 22:01:03.955877  106287 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 22:01:03.955889  106287 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0912 22:01:03.955987  106287 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 22:01:03.955997  106287 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0912 22:01:03.956003  106287 kubeadm.go:322] 
	I0912 22:01:03.956114  106287 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0912 22:01:03.956128  106287 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0912 22:01:03.956222  106287 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0912 22:01:03.956232  106287 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0912 22:01:03.956237  106287 kubeadm.go:322] 
	I0912 22:01:03.956344  106287 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token nh7k0p.oi82gulw1fgjaqlg \
	I0912 22:01:03.956359  106287 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token nh7k0p.oi82gulw1fgjaqlg \
	I0912 22:01:03.956516  106287 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 \
	I0912 22:01:03.956527  106287 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 \
	I0912 22:01:03.956550  106287 kubeadm.go:322] 	--control-plane 
	I0912 22:01:03.956556  106287 command_runner.go:130] > 	--control-plane 
	I0912 22:01:03.956562  106287 kubeadm.go:322] 
	I0912 22:01:03.956705  106287 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0912 22:01:03.956715  106287 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0912 22:01:03.956722  106287 kubeadm.go:322] 
	I0912 22:01:03.956821  106287 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token nh7k0p.oi82gulw1fgjaqlg \
	I0912 22:01:03.956832  106287 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token nh7k0p.oi82gulw1fgjaqlg \
	I0912 22:01:03.956968  106287 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 
	I0912 22:01:03.956979  106287 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 
	I0912 22:01:03.958587  106287 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0912 22:01:03.958619  106287 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0912 22:01:03.958781  106287 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:01:03.958795  106287 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
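	For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA public key and can be recomputed from the certificatesDir configured earlier (/var/lib/minikube/certs). This is the openssl pipeline documented for kubeadm, assuming an RSA CA key as minikube generates:

	    # Recompute the discovery-token-ca-cert-hash from the CA certificate.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	    # The bootstrap token expires after the 24h ttl above; a fresh join command can be minted with:
	    sudo kubeadm token create --print-join-command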
	I0912 22:01:03.958822  106287 cni.go:84] Creating CNI manager for ""
	I0912 22:01:03.958842  106287 cni.go:136] 1 nodes found, recommending kindnet
	I0912 22:01:03.960654  106287 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 22:01:03.961974  106287 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:01:03.965520  106287 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0912 22:01:03.965541  106287 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0912 22:01:03.965551  106287 command_runner.go:130] > Device: 36h/54d	Inode: 555970      Links: 1
	I0912 22:01:03.965561  106287 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:01:03.965569  106287 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0912 22:01:03.965577  106287 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0912 22:01:03.965586  106287 command_runner.go:130] > Change: 2023-09-12 21:43:43.379872388 +0000
	I0912 22:01:03.965598  106287 command_runner.go:130] >  Birth: 2023-09-12 21:43:43.355870062 +0000
	I0912 22:01:03.965653  106287 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 22:01:03.965667  106287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 22:01:03.982124  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:01:04.644198  106287 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0912 22:01:04.649500  106287 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0912 22:01:04.655818  106287 command_runner.go:130] > serviceaccount/kindnet created
	I0912 22:01:04.666195  106287 command_runner.go:130] > daemonset.apps/kindnet created
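	With the kindnet manifest applied (clusterrole, clusterrolebinding, serviceaccount and daemonset created above), the rollout can be watched with kubectl; the kube-system namespace is assumed here, matching where minikube deploys kindnet:

	    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
	    kubectl -n kube-system get pods -o wide | grep kindnet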
	I0912 22:01:04.670501  106287 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 22:01:04.670589  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:04.670589  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1 minikube.k8s.io/name=multinode-947523 minikube.k8s.io/updated_at=2023_09_12T22_01_04_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:04.751753  106287 command_runner.go:130] > node/multinode-947523 labeled
	I0912 22:01:04.754381  106287 command_runner.go:130] > -16
	I0912 22:01:04.754412  106287 ops.go:34] apiserver oom_adj: -16
	I0912 22:01:04.754430  106287 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0912 22:01:04.754511  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:04.818730  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:04.818821  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:04.880737  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:05.384715  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:05.446385  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:05.884833  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:05.946150  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:06.384124  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:06.444125  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:06.884018  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:06.946951  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:07.384656  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:07.447944  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:07.884578  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:07.947779  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:08.384359  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:08.447988  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:08.884600  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:08.946976  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:09.384044  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:09.446196  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:09.884813  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:09.949572  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:10.384008  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:10.445208  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:10.884456  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:10.947385  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:11.383951  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:11.444498  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:11.884444  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:11.948403  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:12.383975  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:12.442852  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:12.883899  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:12.948017  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:13.384682  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:13.446056  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:13.884739  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:13.946897  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:14.384523  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:14.446196  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:14.883937  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:14.946213  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:15.384839  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:15.444548  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:15.884423  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:15.950393  106287 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0912 22:01:16.383938  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:01:16.453045  106287 command_runner.go:130] > NAME      SECRETS   AGE
	I0912 22:01:16.453068  106287 command_runner.go:130] > default   0         0s
	I0912 22:01:16.453100  106287 kubeadm.go:1081] duration metric: took 11.782579858s to wait for elevateKubeSystemPrivileges.
	I0912 22:01:16.453121  106287 kubeadm.go:406] StartCluster complete in 21.631920312s
	I0912 22:01:16.453145  106287 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:01:16.453223  106287 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:16.454148  106287 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:01:16.454379  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:01:16.454511  106287 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:01:16.454609  106287 addons.go:69] Setting storage-provisioner=true in profile "multinode-947523"
	I0912 22:01:16.454616  106287 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:01:16.454634  106287 addons.go:231] Setting addon storage-provisioner=true in "multinode-947523"
	I0912 22:01:16.454673  106287 addons.go:69] Setting default-storageclass=true in profile "multinode-947523"
	I0912 22:01:16.454688  106287 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-947523"
	I0912 22:01:16.454759  106287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:16.455085  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:01:16.455113  106287 kapi.go:59] client config for multinode-947523: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:01:16.455312  106287 host.go:66] Checking if "multinode-947523" exists ...
	I0912 22:01:16.455819  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:01:16.455848  106287 cert_rotation.go:137] Starting client certificate rotation controller
	I0912 22:01:16.456145  106287 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 22:01:16.456194  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:16.456216  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:16.456236  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:16.467142  106287 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0912 22:01:16.467173  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:16.467184  106287 round_trippers.go:580]     Content-Length: 291
	I0912 22:01:16.467192  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:16 GMT
	I0912 22:01:16.467198  106287 round_trippers.go:580]     Audit-Id: 3ccaf493-30d5-479f-ab59-265b6bdd9534
	I0912 22:01:16.467203  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:16.467208  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:16.467213  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:16.467225  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:16.467249  106287 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"321","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:16.467574  106287 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"321","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:16.467673  106287 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 22:01:16.467683  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:16.467693  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:16.467701  106287 round_trippers.go:473]     Content-Type: application/json
	I0912 22:01:16.467710  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:16.474189  106287 round_trippers.go:574] Response Status: 409 Conflict in 6 milliseconds
	I0912 22:01:16.474214  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:16.474223  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:16.474233  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:16.474240  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:16.474251  106287 round_trippers.go:580]     Content-Length: 332
	I0912 22:01:16.474263  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:16 GMT
	I0912 22:01:16.474276  106287 round_trippers.go:580]     Audit-Id: a0728b49-2275-4b95-9c98-941572240239
	I0912 22:01:16.474289  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:16.474326  106287 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Operation cannot be fulfilled on deployments.apps \"coredns\": the object has been modified; please apply your changes to the latest version and try again","reason":"Conflict","details":{"name":"coredns","group":"apps","kind":"deployments"},"code":409}
	W0912 22:01:16.474563  106287 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "multinode-947523" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0912 22:01:16.474587  106287 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0912 22:01:16.474612  106287 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:01:16.477073  106287 out.go:177] * Verifying Kubernetes components...
	I0912 22:01:16.478210  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:01:16.478705  106287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:16.480174  106287 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:01:16.481339  106287 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:01:16.480862  106287 kapi.go:59] client config for multinode-947523: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:01:16.481358  106287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 22:01:16.481463  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:01:16.481717  106287 round_trippers.go:463] GET https://192.168.58.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0912 22:01:16.481734  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:16.481746  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:16.481755  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:16.486827  106287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 22:01:16.486845  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:16.486854  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:16.486862  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:16.486869  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:16.486876  106287 round_trippers.go:580]     Content-Length: 109
	I0912 22:01:16.486886  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:16 GMT
	I0912 22:01:16.486899  106287 round_trippers.go:580]     Audit-Id: 07a8bca8-0278-4647-b181-92ad65068df2
	I0912 22:01:16.486912  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:16.486938  106287 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"346"},"items":[]}
	I0912 22:01:16.487135  106287 addons.go:231] Setting addon default-storageclass=true in "multinode-947523"
	I0912 22:01:16.487180  106287 host.go:66] Checking if "multinode-947523" exists ...
	I0912 22:01:16.487492  106287 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:01:16.498724  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:01:16.504721  106287 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 22:01:16.504742  106287 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 22:01:16.504798  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:01:16.520079  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:01:16.565439  106287 command_runner.go:130] > apiVersion: v1
	I0912 22:01:16.565460  106287 command_runner.go:130] > data:
	I0912 22:01:16.565464  106287 command_runner.go:130] >   Corefile: |
	I0912 22:01:16.565468  106287 command_runner.go:130] >     .:53 {
	I0912 22:01:16.565473  106287 command_runner.go:130] >         errors
	I0912 22:01:16.565477  106287 command_runner.go:130] >         health {
	I0912 22:01:16.565484  106287 command_runner.go:130] >            lameduck 5s
	I0912 22:01:16.565488  106287 command_runner.go:130] >         }
	I0912 22:01:16.565495  106287 command_runner.go:130] >         ready
	I0912 22:01:16.565505  106287 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0912 22:01:16.565518  106287 command_runner.go:130] >            pods insecure
	I0912 22:01:16.565527  106287 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0912 22:01:16.565542  106287 command_runner.go:130] >            ttl 30
	I0912 22:01:16.565549  106287 command_runner.go:130] >         }
	I0912 22:01:16.565559  106287 command_runner.go:130] >         prometheus :9153
	I0912 22:01:16.565569  106287 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0912 22:01:16.565576  106287 command_runner.go:130] >            max_concurrent 1000
	I0912 22:01:16.565580  106287 command_runner.go:130] >         }
	I0912 22:01:16.565586  106287 command_runner.go:130] >         cache 30
	I0912 22:01:16.565592  106287 command_runner.go:130] >         loop
	I0912 22:01:16.565602  106287 command_runner.go:130] >         reload
	I0912 22:01:16.565610  106287 command_runner.go:130] >         loadbalance
	I0912 22:01:16.565621  106287 command_runner.go:130] >     }
	I0912 22:01:16.565631  106287 command_runner.go:130] > kind: ConfigMap
	I0912 22:01:16.565643  106287 command_runner.go:130] > metadata:
	I0912 22:01:16.565656  106287 command_runner.go:130] >   creationTimestamp: "2023-09-12T22:01:03Z"
	I0912 22:01:16.565665  106287 command_runner.go:130] >   name: coredns
	I0912 22:01:16.565672  106287 command_runner.go:130] >   namespace: kube-system
	I0912 22:01:16.565678  106287 command_runner.go:130] >   resourceVersion: "236"
	I0912 22:01:16.565690  106287 command_runner.go:130] >   uid: bfedcbba-95aa-4f28-8bb3-bbce4e02da6a
	I0912 22:01:16.565855  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 22:01:16.566067  106287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:16.566315  106287 kapi.go:59] client config for multinode-947523: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:01:16.566548  106287 node_ready.go:35] waiting up to 6m0s for node "multinode-947523" to be "Ready" ...
	I0912 22:01:16.566612  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:16.566620  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:16.566650  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:16.566662  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:16.568819  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:16.568842  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:16.568853  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:16.568863  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:16.568871  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:16.568885  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:16 GMT
	I0912 22:01:16.568893  106287 round_trippers.go:580]     Audit-Id: afad8bd4-70b8-4a2b-b51b-aafa4c1e92e3
	I0912 22:01:16.568904  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:16.569054  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"315","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:0
1:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I0912 22:01:16.569869  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:16.569889  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:16.569899  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:16.569908  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:16.571959  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:16.571976  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:16.571985  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:16.571992  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:16.571999  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:16 GMT
	I0912 22:01:16.572008  106287 round_trippers.go:580]     Audit-Id: 88dbde73-7127-436a-8bd9-d60d696721a9
	I0912 22:01:16.572018  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:16.572038  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:16.572154  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"315","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:0
1:00Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 6037 chars]
	I0912 22:01:16.640385  106287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 22:01:16.641025  106287 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:01:17.055297  106287 command_runner.go:130] > configmap/coredns replaced
	I0912 22:01:17.059572  106287 start.go:917] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS's ConfigMap
	I0912 22:01:17.072806  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:17.072827  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:17.072835  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:17.072841  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:17.123216  106287 round_trippers.go:574] Response Status: 200 OK in 50 milliseconds
	I0912 22:01:17.123242  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:17.123253  106287 round_trippers.go:580]     Audit-Id: 601f2e0f-96c9-4b83-b0ac-a40e70aa9d21
	I0912 22:01:17.123263  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:17.123272  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:17.123281  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:17.123290  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:17.123305  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:17 GMT
	I0912 22:01:17.123699  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"351","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0912 22:01:17.132370  106287 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0912 22:01:17.361136  106287 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0912 22:01:17.367171  106287 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0912 22:01:17.376137  106287 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0912 22:01:17.382146  106287 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0912 22:01:17.421069  106287 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0912 22:01:17.429836  106287 command_runner.go:130] > pod/storage-provisioner created
	I0912 22:01:17.437475  106287 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0912 22:01:17.438648  106287 addons.go:502] enable addons completed in 984.139105ms: enabled=[default-storageclass storage-provisioner]
	I0912 22:01:17.573245  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:17.573264  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:17.573273  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:17.573288  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:17.575849  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:17.575868  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:17.575875  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:17.575881  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:17.575887  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:17.575892  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:17.575899  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:17 GMT
	I0912 22:01:17.575907  106287 round_trippers.go:580]     Audit-Id: 76c68d5d-050b-4202-850e-1fbe03ab14f1
	I0912 22:01:17.576122  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"351","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0912 22:01:18.072675  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:18.072695  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:18.072706  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:18.072714  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:18.075429  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:18.075450  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:18.075460  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:18.075468  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:18.075477  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:18.075486  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:18.075502  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:18 GMT
	I0912 22:01:18.075514  106287 round_trippers.go:580]     Audit-Id: c2eb2758-5216-475f-bfc4-53061cce4963
	I0912 22:01:18.075660  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"351","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0912 22:01:18.573328  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:18.573347  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:18.573354  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:18.573361  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:18.575663  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:18.575683  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:18.575694  106287 round_trippers.go:580]     Audit-Id: 3c49a919-f312-492c-b0dd-e15662356856
	I0912 22:01:18.575702  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:18.575711  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:18.575719  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:18.575729  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:18.575741  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:18 GMT
	I0912 22:01:18.575887  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"351","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0912 22:01:18.576245  106287 node_ready.go:58] node "multinode-947523" has status "Ready":"False"
	I0912 22:01:19.073482  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:19.073502  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.073510  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.073516  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.075896  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:19.075915  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.075922  106287 round_trippers.go:580]     Audit-Id: 681d207d-2d43-4ba3-8b7c-e5d9e0505296
	I0912 22:01:19.075927  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.075932  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.075937  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.075942  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.075948  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.076062  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"351","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6141 chars]
	I0912 22:01:19.573588  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:19.573613  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.573633  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.573640  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.575672  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:19.575694  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.575700  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.575706  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.575711  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.575717  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.575724  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.575732  106287 round_trippers.go:580]     Audit-Id: 7765740f-b0b7-447e-bc56-d3a353ba7e51
	I0912 22:01:19.575860  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:19.576160  106287 node_ready.go:49] node "multinode-947523" has status "Ready":"True"
	I0912 22:01:19.576182  106287 node_ready.go:38] duration metric: took 3.009619822s waiting for node "multinode-947523" to be "Ready" ...
	I0912 22:01:19.576193  106287 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:01:19.576249  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:19.576256  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.576263  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.576271  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.579337  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:19.579354  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.579365  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.579371  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.579376  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.579381  106287 round_trippers.go:580]     Audit-Id: d2872759-169e-4a30-adef-a08ccf5ed15d
	I0912 22:01:19.579390  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.579398  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.579901  106287 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"387"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"387","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62677 chars]
	I0912 22:01:19.583068  106287 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:19.583146  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6q54t
	I0912 22:01:19.583155  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.583163  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.583170  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.585050  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:19.585064  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.585071  106287 round_trippers.go:580]     Audit-Id: 4897f638-2918-4092-80f1-9943f3dcdab3
	I0912 22:01:19.585076  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.585081  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.585086  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.585092  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.585097  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.585245  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"387","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0912 22:01:19.585661  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:19.585675  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.585682  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.585691  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.587359  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:19.587372  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.587378  106287 round_trippers.go:580]     Audit-Id: 14ce36d9-8108-4f16-8241-5f3f0d81f810
	I0912 22:01:19.587383  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.587388  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.587393  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.587398  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.587403  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.587535  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:19.587848  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6q54t
	I0912 22:01:19.587860  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.587866  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.587872  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.589478  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:19.589493  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.589501  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.589510  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.589518  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.589526  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.589538  106287 round_trippers.go:580]     Audit-Id: a355b298-1397-4f15-8bf6-9d3f884c7131
	I0912 22:01:19.589550  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.589650  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"387","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6150 chars]
	I0912 22:01:19.590122  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:19.590136  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:19.590146  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:19.590162  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:19.591826  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:19.591840  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:19.591847  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:19 GMT
	I0912 22:01:19.591853  106287 round_trippers.go:580]     Audit-Id: 77db74bf-02f1-40b1-9244-9830299040bc
	I0912 22:01:19.591861  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:19.591869  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:19.591880  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:19.591889  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:19.592059  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:20.092690  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6q54t
	I0912 22:01:20.092716  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.092728  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.092742  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.095010  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:20.095035  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.095045  106287 round_trippers.go:580]     Audit-Id: 7ea9cb52-2684-44fc-95b2-00a155e5b1f7
	I0912 22:01:20.095052  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.095060  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.095068  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.095075  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.095087  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.095208  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"400","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0912 22:01:20.095785  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:20.095799  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.095810  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.095819  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.097812  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.097827  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.097833  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.097839  106287 round_trippers.go:580]     Audit-Id: 7bc4236e-17c1-41f1-9a8e-dced7cbbad76
	I0912 22:01:20.097844  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.097859  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.097866  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.097874  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.098025  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:20.098312  106287 pod_ready.go:92] pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:20.098327  106287 pod_ready.go:81] duration metric: took 515.236882ms waiting for pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:20.098336  106287 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:20.098381  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:20.098389  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.098396  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.098402  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.100185  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.100200  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.100206  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.100212  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.100217  106287 round_trippers.go:580]     Audit-Id: 84cdcb90-a263-4f45-a098-9e4762e09e94
	I0912 22:01:20.100222  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.100226  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.100232  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.100360  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:20.100888  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:20.100904  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.100915  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.100922  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.102666  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.102685  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.102691  106287 round_trippers.go:580]     Audit-Id: 648505e8-4f10-4986-b1d8-68f3d1405c11
	I0912 22:01:20.102696  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.102703  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.102711  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.102720  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.102736  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.102861  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:20.103182  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:20.103192  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.103198  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.103204  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.104951  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.104967  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.104973  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.104978  106287 round_trippers.go:580]     Audit-Id: 37f75ced-d4e5-407e-9727-365321fe9c9e
	I0912 22:01:20.104984  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.104992  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.104997  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.105004  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.105115  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:20.105488  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:20.105498  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.105505  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.105511  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.107234  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.107248  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.107256  106287 round_trippers.go:580]     Audit-Id: ad02ee41-3fb7-4da0-867c-76ecbbbb9c96
	I0912 22:01:20.107261  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.107266  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.107271  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.107282  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.107294  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.107391  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:20.608461  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:20.608487  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.608495  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.608501  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.610861  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:20.610881  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.610888  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.610894  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.610899  106287 round_trippers.go:580]     Audit-Id: b87379dc-18ab-4deb-a5a5-14246b5b6027
	I0912 22:01:20.610905  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.610910  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.610915  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.611087  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:20.611519  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:20.611531  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:20.611539  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:20.611546  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:20.613563  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:20.613586  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:20.613596  106287 round_trippers.go:580]     Audit-Id: ab40a1a0-118f-4cc4-84d3-69308a9dfcf8
	I0912 22:01:20.613604  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:20.613613  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:20.613625  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:20.613636  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:20.613648  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:20 GMT
	I0912 22:01:20.613758  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:21.108487  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:21.108517  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:21.108525  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:21.108531  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:21.110951  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:21.110970  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:21.110978  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:21.110983  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:21.110988  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:21.110993  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:21 GMT
	I0912 22:01:21.110998  106287 round_trippers.go:580]     Audit-Id: 21f72cf3-d33b-472b-8689-72bd9a3e705b
	I0912 22:01:21.111004  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:21.111168  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:21.111626  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:21.111639  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:21.111647  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:21.111653  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:21.113876  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:21.113894  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:21.113901  106287 round_trippers.go:580]     Audit-Id: dde55f9b-4862-4caa-8d00-bfeaa13353dc
	I0912 22:01:21.113907  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:21.113915  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:21.113923  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:21.113934  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:21.113942  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:21 GMT
	I0912 22:01:21.114093  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:21.608833  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:21.608858  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:21.608866  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:21.608872  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:21.611337  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:21.611363  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:21.611373  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:21.611384  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:21.611392  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:21 GMT
	I0912 22:01:21.611404  106287 round_trippers.go:580]     Audit-Id: 90b7640a-74c8-4003-9a35-4904897f2759
	I0912 22:01:21.611412  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:21.611424  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:21.611548  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:21.611990  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:21.612002  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:21.612009  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:21.612015  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:21.613935  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:21.613953  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:21.613959  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:21.613965  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:21.613970  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:21.613975  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:21.613980  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:21 GMT
	I0912 22:01:21.613985  106287 round_trippers.go:580]     Audit-Id: 33cd626d-6077-4d91-80af-dca4e64d5628
	I0912 22:01:21.614138  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:22.108862  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:22.108885  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:22.108894  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:22.108900  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:22.111172  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:22.111190  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:22.111197  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:22.111203  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:22.111211  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:22.111219  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:22.111226  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:22 GMT
	I0912 22:01:22.111234  106287 round_trippers.go:580]     Audit-Id: 71cbebd0-2bb3-4773-bef7-0101c169df07
	I0912 22:01:22.111423  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:22.111886  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:22.111898  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:22.111905  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:22.111911  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:22.113859  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:22.113874  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:22.113881  106287 round_trippers.go:580]     Audit-Id: 0bc9d372-5efc-48d7-98a0-6df7b564b715
	I0912 22:01:22.113886  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:22.113891  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:22.113896  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:22.113901  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:22.113906  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:22 GMT
	I0912 22:01:22.114043  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:22.114351  106287 pod_ready.go:102] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"False"
	I0912 22:01:22.608700  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:22.608723  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:22.608730  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:22.608737  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:22.611033  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:22.611061  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:22.611072  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:22.611082  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:22.611088  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:22 GMT
	I0912 22:01:22.611095  106287 round_trippers.go:580]     Audit-Id: 6c623ae5-7803-4d00-b558-f0ca2b58b352
	I0912 22:01:22.611105  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:22.611114  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:22.611257  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:22.611797  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:22.611813  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:22.611820  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:22.611828  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:22.613677  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:22.613699  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:22.613709  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:22.613717  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:22 GMT
	I0912 22:01:22.613726  106287 round_trippers.go:580]     Audit-Id: c0031c1c-7933-4b78-9963-d2bdfa572bed
	I0912 22:01:22.613738  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:22.613750  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:22.613762  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:22.613875  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:23.108526  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:23.108553  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:23.108561  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:23.108577  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:23.111043  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:23.111065  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:23.111076  106287 round_trippers.go:580]     Audit-Id: f61bef23-63c4-4a35-a576-e0b0981ae1d9
	I0912 22:01:23.111084  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:23.111092  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:23.111100  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:23.111109  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:23.111121  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:23 GMT
	I0912 22:01:23.111267  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:23.111742  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:23.111758  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:23.111769  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:23.111777  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:23.113843  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:23.113865  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:23.113875  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:23 GMT
	I0912 22:01:23.113884  106287 round_trippers.go:580]     Audit-Id: c10e57c7-ed46-4a37-a355-0157ffa82e95
	I0912 22:01:23.113891  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:23.113896  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:23.113904  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:23.113909  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:23.114019  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:23.608734  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:23.608760  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:23.608773  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:23.608781  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:23.611139  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:23.611156  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:23.611163  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:23.611168  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:23.611173  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:23.611179  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:23.611184  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:23 GMT
	I0912 22:01:23.611189  106287 round_trippers.go:580]     Audit-Id: b8abb064-e588-424e-9a78-12dd346a3542
	I0912 22:01:23.611368  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:23.611848  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:23.611861  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:23.611868  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:23.611874  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:23.613741  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:23.613760  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:23.613766  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:23.613774  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:23 GMT
	I0912 22:01:23.613782  106287 round_trippers.go:580]     Audit-Id: 609643e1-8f8d-4938-81fc-b6b6a772c099
	I0912 22:01:23.613789  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:23.613796  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:23.613809  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:23.613914  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:24.108621  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:24.108642  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:24.108650  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:24.108656  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:24.111033  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:24.111050  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:24.111057  106287 round_trippers.go:580]     Audit-Id: aaac64ea-1be5-4c0b-a4db-2e9415e3604e
	I0912 22:01:24.111070  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:24.111075  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:24.111080  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:24.111088  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:24.111096  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:24 GMT
	I0912 22:01:24.111243  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:24.111710  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:24.111726  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:24.111733  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:24.111742  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:24.113568  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:24.113592  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:24.113602  106287 round_trippers.go:580]     Audit-Id: 73646d83-63d2-4471-bd8a-92d0f1369713
	I0912 22:01:24.113613  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:24.113622  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:24.113633  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:24.113642  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:24.113653  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:24 GMT
	I0912 22:01:24.113778  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:24.608436  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:24.608459  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:24.608467  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:24.608473  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:24.610951  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:24.610977  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:24.610987  106287 round_trippers.go:580]     Audit-Id: 037c2c1e-ca01-4c12-bb80-ebb38ebc2d8e
	I0912 22:01:24.610995  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:24.611000  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:24.611006  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:24.611011  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:24.611016  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:24 GMT
	I0912 22:01:24.611125  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:24.611584  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:24.611598  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:24.611605  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:24.611611  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:24.613616  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:24.613650  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:24.613660  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:24.613668  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:24.613677  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:24.613692  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:24 GMT
	I0912 22:01:24.613703  106287 round_trippers.go:580]     Audit-Id: 064db15e-98a7-4248-aee5-02a827dae0c9
	I0912 22:01:24.613715  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:24.613812  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:24.614111  106287 pod_ready.go:102] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"False"
	I0912 22:01:25.108736  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:25.108758  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:25.108766  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:25.108772  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:25.111038  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:25.111063  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:25.111074  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:25.111082  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:25.111091  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:25.111099  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:25.111108  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:25 GMT
	I0912 22:01:25.111121  106287 round_trippers.go:580]     Audit-Id: aacea7cd-edc7-425d-8a7f-4a88ebe14b26
	I0912 22:01:25.111256  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:25.111709  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:25.111722  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:25.111729  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:25.111735  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:25.113578  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:25.113595  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:25.113606  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:25 GMT
	I0912 22:01:25.113614  106287 round_trippers.go:580]     Audit-Id: e0d2b9b4-8a68-4423-a299-1d83f7b698ff
	I0912 22:01:25.113623  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:25.113631  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:25.113644  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:25.113661  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:25.113800  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:25.608312  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:25.608335  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:25.608343  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:25.608351  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:25.610622  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:25.610639  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:25.610646  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:25.610651  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:25 GMT
	I0912 22:01:25.610656  106287 round_trippers.go:580]     Audit-Id: 8f491d5b-024d-4d1b-8c95-9cc4235c5953
	I0912 22:01:25.610662  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:25.610667  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:25.610672  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:25.610842  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:25.611321  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:25.611334  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:25.611342  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:25.611348  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:25.613250  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:25.613265  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:25.613272  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:25.613277  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:25.613282  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:25.613290  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:25 GMT
	I0912 22:01:25.613298  106287 round_trippers.go:580]     Audit-Id: 472f61d5-2448-422a-8c72-07d0c2293d51
	I0912 22:01:25.613310  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:25.613475  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:26.107922  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:26.107945  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:26.107953  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:26.107959  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:26.110101  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:26.110122  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:26.110132  106287 round_trippers.go:580]     Audit-Id: 3958d5d0-f7eb-4315-9e77-ec297124a6a9
	I0912 22:01:26.110141  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:26.110148  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:26.110155  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:26.110163  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:26.110178  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:26 GMT
	I0912 22:01:26.110337  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:26.110772  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:26.110782  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:26.110789  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:26.110794  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:26.112657  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:26.112682  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:26.112692  106287 round_trippers.go:580]     Audit-Id: edd37e4a-f7bb-4f44-a93c-cd5d12442301
	I0912 22:01:26.112699  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:26.112707  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:26.112715  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:26.112728  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:26.112738  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:26 GMT
	I0912 22:01:26.112847  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:26.608739  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:26.608764  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:26.608772  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:26.608778  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:26.611026  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:26.611126  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:26.611150  106287 round_trippers.go:580]     Audit-Id: 795b7b26-05ff-44d1-8cce-b105c1c8b35f
	I0912 22:01:26.611160  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:26.611169  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:26.611182  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:26.611194  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:26.611204  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:26 GMT
	I0912 22:01:26.611325  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:26.611784  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:26.611799  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:26.611806  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:26.611812  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:26.613668  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:26.613684  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:26.613690  106287 round_trippers.go:580]     Audit-Id: c936aaf6-59ae-4b41-b7b2-2c67f69e5e32
	I0912 22:01:26.613696  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:26.613701  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:26.613709  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:26.613714  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:26.613724  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:26 GMT
	I0912 22:01:26.613872  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:26.614204  106287 pod_ready.go:102] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"False"
	I0912 22:01:27.108530  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:27.108557  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:27.108566  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:27.108572  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:27.110854  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:27.110874  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:27.110880  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:27 GMT
	I0912 22:01:27.110885  106287 round_trippers.go:580]     Audit-Id: 14644045-c788-4b9d-8320-21929a805417
	I0912 22:01:27.110890  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:27.110896  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:27.110901  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:27.110912  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:27.111103  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:27.111555  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:27.111570  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:27.111577  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:27.111583  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:27.113645  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:27.113670  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:27.113677  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:27 GMT
	I0912 22:01:27.113682  106287 round_trippers.go:580]     Audit-Id: d7746b7b-1e66-4917-9b6d-1157e61b6751
	I0912 22:01:27.113687  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:27.113692  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:27.113697  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:27.113705  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:27.113821  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:27.608487  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:27.608511  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:27.608522  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:27.608531  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:27.610835  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:27.610854  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:27.610861  106287 round_trippers.go:580]     Audit-Id: 723d132a-7441-4f6d-b950-09f7431f6917
	I0912 22:01:27.610867  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:27.610872  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:27.610877  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:27.610884  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:27.610892  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:27 GMT
	I0912 22:01:27.611097  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:27.611531  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:27.611543  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:27.611550  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:27.611555  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:27.613301  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:27.613318  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:27.613327  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:27 GMT
	I0912 22:01:27.613335  106287 round_trippers.go:580]     Audit-Id: 39aa979c-86ef-42a5-95b4-014189389c2e
	I0912 22:01:27.613342  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:27.613349  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:27.613357  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:27.613366  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:27.613506  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:28.108089  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:28.108111  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:28.108119  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:28.108125  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:28.112207  106287 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0912 22:01:28.112231  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:28.112240  106287 round_trippers.go:580]     Audit-Id: 75615578-3f09-4972-9681-45f18318dbca
	I0912 22:01:28.112249  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:28.112257  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:28.112265  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:28.112277  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:28.112288  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:28 GMT
	I0912 22:01:28.112419  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:28.112920  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:28.112935  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:28.112947  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:28.112956  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:28.114883  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:28.114899  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:28.114906  106287 round_trippers.go:580]     Audit-Id: 696df605-b6d7-4fb3-b9a7-d51bdc61d170
	I0912 22:01:28.114911  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:28.114916  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:28.114921  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:28.114926  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:28.114932  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:28 GMT
	I0912 22:01:28.115079  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:28.608774  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:28.608797  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:28.608805  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:28.608811  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:28.611140  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:28.611159  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:28.611167  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:28.611172  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:28.611185  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:28 GMT
	I0912 22:01:28.611193  106287 round_trippers.go:580]     Audit-Id: f7b22d7f-fba5-4554-8fad-91969ed35af4
	I0912 22:01:28.611198  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:28.611205  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:28.611344  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:28.611828  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:28.611841  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:28.611849  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:28.611857  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:28.613923  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:28.613938  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:28.613945  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:28.613950  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:28 GMT
	I0912 22:01:28.613955  106287 round_trippers.go:580]     Audit-Id: dbbc8f5c-3573-4224-b513-e625b0195543
	I0912 22:01:28.613960  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:28.613965  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:28.613973  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:28.614103  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:28.614433  106287 pod_ready.go:102] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"False"
	I0912 22:01:29.108737  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:29.108761  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:29.108770  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:29.108776  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:29.111299  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:29.111317  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:29.111326  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:29.111334  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:29.111342  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:29 GMT
	I0912 22:01:29.111351  106287 round_trippers.go:580]     Audit-Id: 2eeaec76-0319-49e3-951e-254d93706d3d
	I0912 22:01:29.111360  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:29.111367  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:29.111500  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:29.111952  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:29.111964  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:29.111971  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:29.111977  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:29.113852  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:29.113871  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:29.113880  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:29.113886  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:29.113891  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:29 GMT
	I0912 22:01:29.113896  106287 round_trippers.go:580]     Audit-Id: 07520594-becc-4383-a9e7-194c0d75f3a6
	I0912 22:01:29.113904  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:29.113909  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:29.113996  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:29.608783  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:29.608811  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:29.608820  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:29.608829  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:29.611690  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:29.611714  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:29.611725  106287 round_trippers.go:580]     Audit-Id: 079f2329-0708-41b0-a68f-45290f5826e7
	I0912 22:01:29.611733  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:29.611741  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:29.611806  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:29.611821  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:29.611832  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:29 GMT
	I0912 22:01:29.611966  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"404","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6492 chars]
	I0912 22:01:29.612518  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:29.612537  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:29.612544  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:29.612550  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:29.614569  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:29.614591  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:29.614600  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:29 GMT
	I0912 22:01:29.614608  106287 round_trippers.go:580]     Audit-Id: b9ee791b-1b61-467c-8c03-111fb16118b1
	I0912 22:01:29.614618  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:29.614625  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:29.614632  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:29.614643  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:29.614827  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.108673  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:30.108695  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.108704  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.108710  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.111011  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:30.111036  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.111046  106287 round_trippers.go:580]     Audit-Id: 3ac59718-db2f-4179-8f07-d726e454c155
	I0912 22:01:30.111056  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.111062  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.111068  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.111073  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.111080  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.111212  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"415","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0912 22:01:30.111651  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.111665  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.111672  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.111677  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.113518  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.113536  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.113545  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.113553  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.113561  106287 round_trippers.go:580]     Audit-Id: 4e8c0c5a-aa5f-439b-8062-aa7ea9331a0a
	I0912 22:01:30.113570  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.113579  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.113592  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.113702  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.113987  106287 pod_ready.go:92] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.114003  106287 pod_ready.go:81] duration metric: took 10.01565927s waiting for pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace to be "Ready" ...
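	[editor's note] The pod_ready lines above show the readiness wait that produced this trace: the client re-fetches the Pod (and its Node) roughly every 500 ms until status.conditions reports Ready=True, then records the elapsed time before moving on to the next control-plane pod. Below is a minimal client-go sketch of that polling pattern; it is an illustrative rewrite under stated assumptions, not minikube's actual pod_ready.go, and the helper name waitPodReady plus the kubeconfig loading in main are hypothetical.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the API server until the named Pod reports the Ready
	// condition as True, or the timeout elapses. (Hypothetical helper; not the
	// minikube implementation whose log output appears above.)
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			// Sleep between GETs; ~500 ms matches the cadence visible in the timestamps above.
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}

	func main() {
		// Assumption: the local kubeconfig (~/.kube/config) points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-m8mcv", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

	Plain polling keeps the helper simple and explains the repeated GET pairs in the trace; a watch would avoid the re-fetches, but that is not what this log shows. [end editor's note]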
	I0912 22:01:30.114011  106287 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.114058  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-947523
	I0912 22:01:30.114065  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.114072  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.114078  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.115790  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.115804  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.115810  106287 round_trippers.go:580]     Audit-Id: 539d2244-6da9-4384-9a03-c17d033afcbc
	I0912 22:01:30.115816  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.115821  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.115826  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.115831  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.115836  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.115994  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-947523","namespace":"kube-system","uid":"f4d30e28-adde-4a67-9b29-0029ad5d3239","resourceVersion":"328","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86a8a258e4a4aba5f7a124f3591cc4df","kubernetes.io/config.mirror":"86a8a258e4a4aba5f7a124f3591cc4df","kubernetes.io/config.seen":"2023-09-12T22:01:03.833973700Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0912 22:01:30.116377  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.116390  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.116397  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.116405  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.118476  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:30.118494  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.118504  106287 round_trippers.go:580]     Audit-Id: 1c111b1b-4ff7-40e3-b9c1-f45223a32d60
	I0912 22:01:30.118512  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.118522  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.118535  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.118547  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.118558  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.118664  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.118930  106287 pod_ready.go:92] pod "etcd-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.118943  106287 pod_ready.go:81] duration metric: took 4.925223ms waiting for pod "etcd-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.118953  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.118995  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-947523
	I0912 22:01:30.119003  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.119009  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.119015  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.120619  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.120634  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.120641  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.120646  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.120651  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.120664  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.120672  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.120677  106287 round_trippers.go:580]     Audit-Id: 9ee5b258-138c-4040-88d6-3e1476d648f1
	I0912 22:01:30.120845  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-947523","namespace":"kube-system","uid":"06229ad2-51aa-408c-9fba-049fdaa4cf47","resourceVersion":"285","creationTimestamp":"2023-09-12T22:01:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"0d9718bc3e967e6627626ff1f6f24854","kubernetes.io/config.mirror":"0d9718bc3e967e6627626ff1f6f24854","kubernetes.io/config.seen":"2023-09-12T22:00:57.710584734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0912 22:01:30.121201  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.121212  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.121219  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.121225  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.122719  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.122739  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.122747  106287 round_trippers.go:580]     Audit-Id: 94f71346-7965-4231-9602-3d0dbf2d66bb
	I0912 22:01:30.122756  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.122764  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.122778  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.122789  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.122801  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.122915  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.123215  106287 pod_ready.go:92] pod "kube-apiserver-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.123228  106287 pod_ready.go:81] duration metric: took 4.269728ms waiting for pod "kube-apiserver-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.123236  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.123292  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-947523
	I0912 22:01:30.123302  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.123308  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.123314  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.124898  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.124912  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.124919  106287 round_trippers.go:580]     Audit-Id: 50a02da5-d320-4f4e-9b76-a6e93ff7ca4c
	I0912 22:01:30.124924  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.124929  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.124934  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.124939  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.124945  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.125087  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-947523","namespace":"kube-system","uid":"342d1648-c610-467f-91d4-f47bb5c83634","resourceVersion":"288","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee87f8fd4f4e6399c1c60570a26046b4","kubernetes.io/config.mirror":"ee87f8fd4f4e6399c1c60570a26046b4","kubernetes.io/config.seen":"2023-09-12T22:01:03.833980405Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0912 22:01:30.125578  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.125594  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.125604  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.125612  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.127261  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.127279  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.127289  106287 round_trippers.go:580]     Audit-Id: 2bc43fb8-1fda-4b69-b51e-0f990e42aec7
	I0912 22:01:30.127296  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.127305  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.127316  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.127327  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.127338  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.127457  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.127727  106287 pod_ready.go:92] pod "kube-controller-manager-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.127740  106287 pod_ready.go:81] duration metric: took 4.498474ms waiting for pod "kube-controller-manager-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.127748  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p2j8w" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.127794  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2j8w
	I0912 22:01:30.127802  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.127808  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.127814  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.129469  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.129488  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.129498  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.129508  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.129516  106287 round_trippers.go:580]     Audit-Id: 97f5ae68-cceb-4607-aa6f-458bd11a1234
	I0912 22:01:30.129527  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.129538  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.129549  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.129654  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-p2j8w","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc0d0912-c416-4d26-9520-8e414702468f","resourceVersion":"369","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d3a4320-e6cf-4430-9f20-cd5151fa4503","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d3a4320-e6cf-4430-9f20-cd5151fa4503\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0912 22:01:30.130119  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.130134  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.130145  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.130160  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.131960  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:30.131976  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.131981  106287 round_trippers.go:580]     Audit-Id: 7bbc0eb7-ac0d-44c6-9253-e3783fc91e94
	I0912 22:01:30.131988  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.131996  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.132005  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.132016  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.132024  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.132113  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.132385  106287 pod_ready.go:92] pod "kube-proxy-p2j8w" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.132396  106287 pod_ready.go:81] duration metric: took 4.64301ms waiting for pod "kube-proxy-p2j8w" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.132404  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.308687  106287 request.go:629] Waited for 176.224853ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-947523
	I0912 22:01:30.308756  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-947523
	I0912 22:01:30.308764  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.308772  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.308780  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.310960  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:30.310984  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.310991  106287 round_trippers.go:580]     Audit-Id: e49ee1cc-65e8-47b9-b63b-639f0e6c2456
	I0912 22:01:30.311000  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.311008  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.311016  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.311025  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.311034  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.311194  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-947523","namespace":"kube-system","uid":"0c533c57-b3e2-461b-ab69-fc5253dc6074","resourceVersion":"289","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5670a9664f0c7c79145baba26de8ea87","kubernetes.io/config.mirror":"5670a9664f0c7c79145baba26de8ea87","kubernetes.io/config.seen":"2023-09-12T22:01:03.833981869Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0912 22:01:30.508700  106287 request.go:629] Waited for 197.096304ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.508765  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:30.508772  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.508779  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.508789  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.511012  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:30.511030  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.511037  106287 round_trippers.go:580]     Audit-Id: 16e1c5e6-8373-4f25-832b-e4cff7a94c30
	I0912 22:01:30.511042  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.511048  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.511056  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.511067  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.511078  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.511467  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 5947 chars]
	I0912 22:01:30.512017  106287 pod_ready.go:92] pod "kube-scheduler-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:30.512066  106287 pod_ready.go:81] duration metric: took 379.654414ms waiting for pod "kube-scheduler-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:30.512089  106287 pod_ready.go:38] duration metric: took 10.935884473s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:01:30.512122  106287 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:01:30.512202  106287 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:01:30.523242  106287 command_runner.go:130] > 1427
	I0912 22:01:30.523288  106287 api_server.go:72] duration metric: took 14.048646072s to wait for apiserver process to appear ...
	I0912 22:01:30.523303  106287 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:01:30.523323  106287 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0912 22:01:30.527354  106287 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0912 22:01:30.527429  106287 round_trippers.go:463] GET https://192.168.58.2:8443/version
	I0912 22:01:30.527440  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.527453  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.527467  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.528420  106287 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0912 22:01:30.528436  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.528442  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.528448  106287 round_trippers.go:580]     Content-Length: 263
	I0912 22:01:30.528453  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.528458  106287 round_trippers.go:580]     Audit-Id: 6a578fdc-7339-4171-9aa4-eccfd66edff4
	I0912 22:01:30.528464  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.528471  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.528476  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.528494  106287 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0912 22:01:30.528574  106287 api_server.go:141] control plane version: v1.28.1
	I0912 22:01:30.528587  106287 api_server.go:131] duration metric: took 5.278597ms to wait for apiserver health ...
	I0912 22:01:30.528614  106287 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:01:30.708995  106287 request.go:629] Waited for 180.317055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:30.709065  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:30.709076  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.709088  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.709102  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.712195  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:30.712220  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.712228  106287 round_trippers.go:580]     Audit-Id: 9b0d0c32-ac62-48ca-8977-1e179c8a7d74
	I0912 22:01:30.712233  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.712238  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.712243  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.712249  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.712254  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.712815  106287 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"400","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0912 22:01:30.714774  106287 system_pods.go:59] 9 kube-system pods found
	I0912 22:01:30.714802  106287 system_pods.go:61] "coredns-5dd5756b68-6q54t" [43187880-a314-47f3-b42a-608882b6043b] Running
	I0912 22:01:30.714810  106287 system_pods.go:61] "coredns-5dd5756b68-m8mcv" [a925e809-5cce-4008-870d-3de1b67bbe83] Running
	I0912 22:01:30.714815  106287 system_pods.go:61] "etcd-multinode-947523" [f4d30e28-adde-4a67-9b29-0029ad5d3239] Running
	I0912 22:01:30.714822  106287 system_pods.go:61] "kindnet-947mb" [2122d504-c10c-4ec4-91bb-ba91cca8f5e6] Running
	I0912 22:01:30.714829  106287 system_pods.go:61] "kube-apiserver-multinode-947523" [06229ad2-51aa-408c-9fba-049fdaa4cf47] Running
	I0912 22:01:30.714841  106287 system_pods.go:61] "kube-controller-manager-multinode-947523" [342d1648-c610-467f-91d4-f47bb5c83634] Running
	I0912 22:01:30.714848  106287 system_pods.go:61] "kube-proxy-p2j8w" [cc0d0912-c416-4d26-9520-8e414702468f] Running
	I0912 22:01:30.714855  106287 system_pods.go:61] "kube-scheduler-multinode-947523" [0c533c57-b3e2-461b-ab69-fc5253dc6074] Running
	I0912 22:01:30.714862  106287 system_pods.go:61] "storage-provisioner" [7feda27b-75bb-445b-8ed7-331ebce33a72] Running
	I0912 22:01:30.714871  106287 system_pods.go:74] duration metric: took 186.246797ms to wait for pod list to return data ...
	I0912 22:01:30.714884  106287 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:01:30.909286  106287 request.go:629] Waited for 194.332987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0912 22:01:30.909335  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/default/serviceaccounts
	I0912 22:01:30.909340  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:30.909360  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:30.909366  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:30.911805  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:30.911827  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:30.911834  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:30.911839  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:30.911846  106287 round_trippers.go:580]     Content-Length: 261
	I0912 22:01:30.911851  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:30 GMT
	I0912 22:01:30.911857  106287 round_trippers.go:580]     Audit-Id: 5a315888-2769-47d1-b643-1339005493df
	I0912 22:01:30.911862  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:30.911867  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:30.911891  106287 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"505380ae-d64b-486d-aecf-1cf5fcd497a2","resourceVersion":"319","creationTimestamp":"2023-09-12T22:01:16Z"}}]}
	I0912 22:01:30.912098  106287 default_sa.go:45] found service account: "default"
	I0912 22:01:30.912113  106287 default_sa.go:55] duration metric: took 197.223807ms for default service account to be created ...
	I0912 22:01:30.912121  106287 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:01:31.109537  106287 request.go:629] Waited for 197.359448ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:31.109629  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:31.109640  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:31.109655  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:31.109675  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:31.113322  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:31.113344  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:31.113353  106287 round_trippers.go:580]     Audit-Id: 2f4279a7-2f3f-49b3-915e-c9e219d13258
	I0912 22:01:31.113361  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:31.113368  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:31.113374  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:31.113381  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:31.113389  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:31 GMT
	I0912 22:01:31.113868  106287 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"400","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 62868 chars]
	I0912 22:01:31.115757  106287 system_pods.go:86] 9 kube-system pods found
	I0912 22:01:31.115780  106287 system_pods.go:89] "coredns-5dd5756b68-6q54t" [43187880-a314-47f3-b42a-608882b6043b] Running
	I0912 22:01:31.115787  106287 system_pods.go:89] "coredns-5dd5756b68-m8mcv" [a925e809-5cce-4008-870d-3de1b67bbe83] Running
	I0912 22:01:31.115794  106287 system_pods.go:89] "etcd-multinode-947523" [f4d30e28-adde-4a67-9b29-0029ad5d3239] Running
	I0912 22:01:31.115799  106287 system_pods.go:89] "kindnet-947mb" [2122d504-c10c-4ec4-91bb-ba91cca8f5e6] Running
	I0912 22:01:31.115806  106287 system_pods.go:89] "kube-apiserver-multinode-947523" [06229ad2-51aa-408c-9fba-049fdaa4cf47] Running
	I0912 22:01:31.115814  106287 system_pods.go:89] "kube-controller-manager-multinode-947523" [342d1648-c610-467f-91d4-f47bb5c83634] Running
	I0912 22:01:31.115825  106287 system_pods.go:89] "kube-proxy-p2j8w" [cc0d0912-c416-4d26-9520-8e414702468f] Running
	I0912 22:01:31.115833  106287 system_pods.go:89] "kube-scheduler-multinode-947523" [0c533c57-b3e2-461b-ab69-fc5253dc6074] Running
	I0912 22:01:31.115845  106287 system_pods.go:89] "storage-provisioner" [7feda27b-75bb-445b-8ed7-331ebce33a72] Running
	I0912 22:01:31.115854  106287 system_pods.go:126] duration metric: took 203.7269ms to wait for k8s-apps to be running ...
	I0912 22:01:31.115867  106287 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:01:31.115918  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:01:31.126448  106287 system_svc.go:56] duration metric: took 10.573841ms WaitForService to wait for kubelet.
	I0912 22:01:31.126468  106287 kubeadm.go:581] duration metric: took 14.651828194s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:01:31.126486  106287 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:01:31.308681  106287 request.go:629] Waited for 182.114958ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0912 22:01:31.308733  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0912 22:01:31.308738  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:31.308745  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:31.308751  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:31.310961  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:31.310982  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:31.310989  106287 round_trippers.go:580]     Audit-Id: 73743729-d852-4cf0-ae7e-4b233b37c584
	I0912 22:01:31.310995  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:31.311000  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:31.311005  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:31.311011  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:31.311016  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:31 GMT
	I0912 22:01:31.311148  106287 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"420"},"items":[{"metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"378","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 6000 chars]
	I0912 22:01:31.311613  106287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:01:31.311634  106287 node_conditions.go:123] node cpu capacity is 8
	I0912 22:01:31.311660  106287 node_conditions.go:105] duration metric: took 185.155906ms to run NodePressure ...
	I0912 22:01:31.311676  106287 start.go:228] waiting for startup goroutines ...
	I0912 22:01:31.311689  106287 start.go:233] waiting for cluster config update ...
	I0912 22:01:31.311704  106287 start.go:242] writing updated cluster config ...
	I0912 22:01:31.313997  106287 out.go:177] 
	I0912 22:01:31.315437  106287 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:01:31.315499  106287 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json ...
	I0912 22:01:31.317103  106287 out.go:177] * Starting worker node multinode-947523-m02 in cluster multinode-947523
	I0912 22:01:31.318373  106287 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:01:31.319810  106287 out.go:177] * Pulling base image ...
	I0912 22:01:31.321244  106287 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:01:31.321263  106287 cache.go:57] Caching tarball of preloaded images
	I0912 22:01:31.321316  106287 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:01:31.321358  106287 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:01:31.321370  106287 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 22:01:31.321443  106287 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json ...
	I0912 22:01:31.337716  106287 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:01:31.337738  106287 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 22:01:31.337757  106287 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:01:31.337791  106287 start.go:365] acquiring machines lock for multinode-947523-m02: {Name:mkadbb7d1b9d20de81630619374d5289b1e556bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:01:31.337897  106287 start.go:369] acquired machines lock for "multinode-947523-m02" in 86.046µs
	I0912 22:01:31.337927  106287 start.go:93] Provisioning new machine with config: &{Name:multinode-947523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0912 22:01:31.338013  106287 start.go:125] createHost starting for "m02" (driver="docker")
	I0912 22:01:31.339768  106287 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0912 22:01:31.339876  106287 start.go:159] libmachine.API.Create for "multinode-947523" (driver="docker")
	I0912 22:01:31.339905  106287 client.go:168] LocalClient.Create starting
	I0912 22:01:31.339982  106287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 22:01:31.340014  106287 main.go:141] libmachine: Decoding PEM data...
	I0912 22:01:31.340029  106287 main.go:141] libmachine: Parsing certificate...
	I0912 22:01:31.340075  106287 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 22:01:31.340094  106287 main.go:141] libmachine: Decoding PEM data...
	I0912 22:01:31.340105  106287 main.go:141] libmachine: Parsing certificate...
	I0912 22:01:31.340285  106287 cli_runner.go:164] Run: docker network inspect multinode-947523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:01:31.355680  106287 network_create.go:76] Found existing network {name:multinode-947523 subnet:0xc0015778f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0912 22:01:31.355722  106287 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-947523-m02" container
	I0912 22:01:31.355778  106287 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 22:01:31.371046  106287 cli_runner.go:164] Run: docker volume create multinode-947523-m02 --label name.minikube.sigs.k8s.io=multinode-947523-m02 --label created_by.minikube.sigs.k8s.io=true
	I0912 22:01:31.388300  106287 oci.go:103] Successfully created a docker volume multinode-947523-m02
	I0912 22:01:31.388410  106287 cli_runner.go:164] Run: docker run --rm --name multinode-947523-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-947523-m02 --entrypoint /usr/bin/test -v multinode-947523-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 22:01:31.980361  106287 oci.go:107] Successfully prepared a docker volume multinode-947523-m02
	I0912 22:01:31.980408  106287 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:01:31.980427  106287 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 22:01:31.980487  106287 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-947523-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 22:01:37.062153  106287 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v multinode-947523-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.08160972s)
	I0912 22:01:37.062190  106287 kic.go:199] duration metric: took 5.081759 seconds to extract preloaded images to volume
	W0912 22:01:37.062304  106287 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 22:01:37.062408  106287 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 22:01:37.111746  106287 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-947523-m02 --name multinode-947523-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-947523-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-947523-m02 --network multinode-947523 --ip 192.168.58.3 --volume multinode-947523-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 22:01:37.397320  106287 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Running}}
	I0912 22:01:37.415229  106287 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Status}}
	I0912 22:01:37.432108  106287 cli_runner.go:164] Run: docker exec multinode-947523-m02 stat /var/lib/dpkg/alternatives/iptables
	I0912 22:01:37.489401  106287 oci.go:144] the created container "multinode-947523-m02" has a running status.
	I0912 22:01:37.489430  106287 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa...
	I0912 22:01:37.534765  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0912 22:01:37.534805  106287 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 22:01:37.553936  106287 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Status}}
	I0912 22:01:37.571053  106287 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 22:01:37.571077  106287 kic_runner.go:114] Args: [docker exec --privileged multinode-947523-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 22:01:37.638810  106287 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Status}}
	I0912 22:01:37.656143  106287 machine.go:88] provisioning docker machine ...
	I0912 22:01:37.656181  106287 ubuntu.go:169] provisioning hostname "multinode-947523-m02"
	I0912 22:01:37.656244  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:37.672966  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:01:37.673424  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0912 22:01:37.673446  106287 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-947523-m02 && echo "multinode-947523-m02" | sudo tee /etc/hostname
	I0912 22:01:37.674020  106287 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42366->127.0.0.1:32852: read: connection reset by peer
	I0912 22:01:40.819058  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-947523-m02
	
	I0912 22:01:40.819143  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:40.835253  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:01:40.835612  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0912 22:01:40.835632  106287 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-947523-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-947523-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-947523-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:01:40.968793  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:01:40.968826  106287 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:01:40.968854  106287 ubuntu.go:177] setting up certificates
	I0912 22:01:40.968873  106287 provision.go:83] configureAuth start
	I0912 22:01:40.968982  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523-m02
	I0912 22:01:40.984600  106287 provision.go:138] copyHostCerts
	I0912 22:01:40.984654  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:01:40.984685  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:01:40.984694  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:01:40.984755  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:01:40.984827  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:01:40.984844  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:01:40.984849  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:01:40.984872  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:01:40.984915  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:01:40.984930  106287 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:01:40.984936  106287 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:01:40.984956  106287 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:01:40.985005  106287 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.multinode-947523-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-947523-m02]
	I0912 22:01:41.071563  106287 provision.go:172] copyRemoteCerts
	I0912 22:01:41.071623  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:01:41.071657  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.088024  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:01:41.184777  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0912 22:01:41.184845  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:01:41.206211  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0912 22:01:41.206281  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0912 22:01:41.227119  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0912 22:01:41.227183  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:01:41.248524  106287 provision.go:86] duration metric: configureAuth took 279.559919ms
	I0912 22:01:41.248555  106287 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:01:41.248762  106287 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:01:41.248869  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.264979  106287 main.go:141] libmachine: Using SSH client type: native
	I0912 22:01:41.265297  106287 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32852 <nil> <nil>}
	I0912 22:01:41.265315  106287 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:01:41.484864  106287 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:01:41.484890  106287 machine.go:91] provisioned docker machine in 3.828726306s
	I0912 22:01:41.484900  106287 client.go:171] LocalClient.Create took 10.14498611s
	I0912 22:01:41.484919  106287 start.go:167] duration metric: libmachine.API.Create for "multinode-947523" took 10.145041628s
	I0912 22:01:41.484930  106287 start.go:300] post-start starting for "multinode-947523-m02" (driver="docker")
	I0912 22:01:41.484942  106287 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:01:41.484993  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:01:41.485031  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.502089  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:01:41.597347  106287 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:01:41.600507  106287 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.3 LTS"
	I0912 22:01:41.600532  106287 command_runner.go:130] > NAME="Ubuntu"
	I0912 22:01:41.600542  106287 command_runner.go:130] > VERSION_ID="22.04"
	I0912 22:01:41.600550  106287 command_runner.go:130] > VERSION="22.04.3 LTS (Jammy Jellyfish)"
	I0912 22:01:41.600558  106287 command_runner.go:130] > VERSION_CODENAME=jammy
	I0912 22:01:41.600565  106287 command_runner.go:130] > ID=ubuntu
	I0912 22:01:41.600570  106287 command_runner.go:130] > ID_LIKE=debian
	I0912 22:01:41.600575  106287 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0912 22:01:41.600579  106287 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0912 22:01:41.600585  106287 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0912 22:01:41.600606  106287 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0912 22:01:41.600616  106287 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0912 22:01:41.600675  106287 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:01:41.600701  106287 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:01:41.600712  106287 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:01:41.600721  106287 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 22:01:41.600730  106287 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:01:41.600781  106287 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:01:41.600850  106287 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:01:41.600860  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /etc/ssl/certs/226982.pem
	I0912 22:01:41.600934  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:01:41.609510  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:01:41.632386  106287 start.go:303] post-start completed in 147.437592ms
	I0912 22:01:41.632772  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523-m02
	I0912 22:01:41.650145  106287 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/config.json ...
	I0912 22:01:41.650483  106287 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:01:41.650535  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.667298  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:01:41.765179  106287 command_runner.go:130] > 23%!
	(MISSING)I0912 22:01:41.765400  106287 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:01:41.769514  106287 command_runner.go:130] > 227G
	I0912 22:01:41.769676  106287 start.go:128] duration metric: createHost completed in 10.431652033s
	I0912 22:01:41.769693  106287 start.go:83] releasing machines lock for "multinode-947523-m02", held for 10.431780882s
	I0912 22:01:41.769771  106287 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523-m02
	I0912 22:01:41.788833  106287 out.go:177] * Found network options:
	I0912 22:01:41.790363  106287 out.go:177]   - NO_PROXY=192.168.58.2
	W0912 22:01:41.791793  106287 proxy.go:119] fail to check proxy env: Error ip not in block
	W0912 22:01:41.791837  106287 proxy.go:119] fail to check proxy env: Error ip not in block
	I0912 22:01:41.791909  106287 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:01:41.791956  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.791967  106287 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:01:41.792021  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:01:41.808786  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:01:41.809422  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:01:42.035039  106287 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0912 22:01:42.035150  106287 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:01:42.039265  106287 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0912 22:01:42.039291  106287 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0912 22:01:42.039300  106287 command_runner.go:130] > Device: bfh/191d	Inode: 552137      Links: 1
	I0912 22:01:42.039307  106287 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:01:42.039314  106287 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0912 22:01:42.039322  106287 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0912 22:01:42.039332  106287 command_runner.go:130] > Change: 2023-09-12 21:43:42.991834785 +0000
	I0912 22:01:42.039346  106287 command_runner.go:130] >  Birth: 2023-09-12 21:43:42.991834785 +0000
	I0912 22:01:42.039405  106287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:01:42.056324  106287 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:01:42.056408  106287 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:01:42.083241  106287 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0912 22:01:42.083301  106287 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 22:01:42.083313  106287 start.go:469] detecting cgroup driver to use...
	I0912 22:01:42.083352  106287 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:01:42.083391  106287 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:01:42.097457  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:01:42.107323  106287 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:01:42.107379  106287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:01:42.120508  106287 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:01:42.134543  106287 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:01:42.210165  106287 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:01:42.291380  106287 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0912 22:01:42.291408  106287 docker.go:212] disabling docker service ...
	I0912 22:01:42.291451  106287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:01:42.308254  106287 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:01:42.318480  106287 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:01:42.398776  106287 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0912 22:01:42.398836  106287 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:01:42.409381  106287 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0912 22:01:42.478987  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:01:42.489039  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:01:42.502108  106287 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0912 22:01:42.502835  106287 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 22:01:42.502905  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:01:42.511517  106287 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:01:42.511583  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:01:42.519955  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:01:42.528724  106287 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:01:42.537710  106287 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:01:42.545786  106287 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:01:42.552463  106287 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0912 22:01:42.553102  106287 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:01:42.560576  106287 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:01:42.636402  106287 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:01:42.733132  106287 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:01:42.733214  106287 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:01:42.736487  106287 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0912 22:01:42.736521  106287 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0912 22:01:42.736530  106287 command_runner.go:130] > Device: c8h/200d	Inode: 190         Links: 1
	I0912 22:01:42.736540  106287 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:01:42.736549  106287 command_runner.go:130] > Access: 2023-09-12 22:01:42.720471001 +0000
	I0912 22:01:42.736567  106287 command_runner.go:130] > Modify: 2023-09-12 22:01:42.720471001 +0000
	I0912 22:01:42.736581  106287 command_runner.go:130] > Change: 2023-09-12 22:01:42.720471001 +0000
	I0912 22:01:42.736588  106287 command_runner.go:130] >  Birth: -
	I0912 22:01:42.736672  106287 start.go:537] Will wait 60s for crictl version
	I0912 22:01:42.736721  106287 ssh_runner.go:195] Run: which crictl
	I0912 22:01:42.739814  106287 command_runner.go:130] > /usr/bin/crictl
	I0912 22:01:42.739883  106287 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:01:42.771887  106287 command_runner.go:130] > Version:  0.1.0
	I0912 22:01:42.771913  106287 command_runner.go:130] > RuntimeName:  cri-o
	I0912 22:01:42.771919  106287 command_runner.go:130] > RuntimeVersion:  1.24.6
	I0912 22:01:42.771924  106287 command_runner.go:130] > RuntimeApiVersion:  v1
	I0912 22:01:42.771942  106287 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 22:01:42.772005  106287 ssh_runner.go:195] Run: crio --version
	I0912 22:01:42.803789  106287 command_runner.go:130] > crio version 1.24.6
	I0912 22:01:42.803809  106287 command_runner.go:130] > Version:          1.24.6
	I0912 22:01:42.803816  106287 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0912 22:01:42.803820  106287 command_runner.go:130] > GitTreeState:     clean
	I0912 22:01:42.803826  106287 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0912 22:01:42.803831  106287 command_runner.go:130] > GoVersion:        go1.18.2
	I0912 22:01:42.803835  106287 command_runner.go:130] > Compiler:         gc
	I0912 22:01:42.803839  106287 command_runner.go:130] > Platform:         linux/amd64
	I0912 22:01:42.803845  106287 command_runner.go:130] > Linkmode:         dynamic
	I0912 22:01:42.803852  106287 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0912 22:01:42.803860  106287 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:01:42.803865  106287 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:01:42.805147  106287 ssh_runner.go:195] Run: crio --version
	I0912 22:01:42.836132  106287 command_runner.go:130] > crio version 1.24.6
	I0912 22:01:42.836158  106287 command_runner.go:130] > Version:          1.24.6
	I0912 22:01:42.836167  106287 command_runner.go:130] > GitCommit:        4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90
	I0912 22:01:42.836172  106287 command_runner.go:130] > GitTreeState:     clean
	I0912 22:01:42.836189  106287 command_runner.go:130] > BuildDate:        2023-06-14T14:44:50Z
	I0912 22:01:42.836197  106287 command_runner.go:130] > GoVersion:        go1.18.2
	I0912 22:01:42.836205  106287 command_runner.go:130] > Compiler:         gc
	I0912 22:01:42.836216  106287 command_runner.go:130] > Platform:         linux/amd64
	I0912 22:01:42.836225  106287 command_runner.go:130] > Linkmode:         dynamic
	I0912 22:01:42.836240  106287 command_runner.go:130] > BuildTags:        apparmor, exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0912 22:01:42.836251  106287 command_runner.go:130] > SeccompEnabled:   true
	I0912 22:01:42.836263  106287 command_runner.go:130] > AppArmorEnabled:  false
	I0912 22:01:42.839497  106287 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 22:01:42.840810  106287 out.go:177]   - env NO_PROXY=192.168.58.2
	I0912 22:01:42.841975  106287 cli_runner.go:164] Run: docker network inspect multinode-947523 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:01:42.857725  106287 ssh_runner.go:195] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0912 22:01:42.861122  106287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:01:42.870803  106287 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523 for IP: 192.168.58.3
	I0912 22:01:42.870829  106287 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:01:42.870967  106287 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 22:01:42.871024  106287 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 22:01:42.871038  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0912 22:01:42.871050  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0912 22:01:42.871062  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0912 22:01:42.871079  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0912 22:01:42.871150  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 22:01:42.871185  106287 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 22:01:42.871200  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 22:01:42.871238  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:01:42.871274  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:01:42.871308  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 22:01:42.871364  106287 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:01:42.871387  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:01:42.871400  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem -> /usr/share/ca-certificates/22698.pem
	I0912 22:01:42.871412  106287 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> /usr/share/ca-certificates/226982.pem
	I0912 22:01:42.871789  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:01:42.892698  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:01:42.913397  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:01:42.934775  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 22:01:42.956083  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:01:42.977434  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 22:01:42.998768  106287 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 22:01:43.020287  106287 ssh_runner.go:195] Run: openssl version
	I0912 22:01:43.025195  106287 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0912 22:01:43.025273  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:01:43.033663  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:01:43.036963  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:01:43.036993  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:01:43.037036  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:01:43.043057  106287 command_runner.go:130] > b5213941
	I0912 22:01:43.043285  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:01:43.051988  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 22:01:43.060520  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 22:01:43.063600  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:01:43.063642  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:01:43.063685  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 22:01:43.069768  106287 command_runner.go:130] > 51391683
	I0912 22:01:43.069832  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
	I0912 22:01:43.078154  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 22:01:43.086569  106287 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 22:01:43.089726  106287 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:01:43.089767  106287 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:01:43.089815  106287 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 22:01:43.095739  106287 command_runner.go:130] > 3ec20f2e
	I0912 22:01:43.095964  106287 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:01:43.104193  106287 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 22:01:43.107201  106287 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:01:43.107247  106287 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:01:43.107322  106287 ssh_runner.go:195] Run: crio config
	I0912 22:01:43.140684  106287 command_runner.go:130] ! time="2023-09-12 22:01:43.140284127Z" level=info msg="Starting CRI-O, version: 1.24.6, git: 4bfe15a9feb74ffc95e66a21c04b15fa7bbc2b90(clean)"
	I0912 22:01:43.140724  106287 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0912 22:01:43.145009  106287 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0912 22:01:43.145030  106287 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0912 22:01:43.145037  106287 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0912 22:01:43.145041  106287 command_runner.go:130] > #
	I0912 22:01:43.145047  106287 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0912 22:01:43.145053  106287 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0912 22:01:43.145059  106287 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0912 22:01:43.145066  106287 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0912 22:01:43.145079  106287 command_runner.go:130] > # reload'.
	I0912 22:01:43.145097  106287 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0912 22:01:43.145112  106287 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0912 22:01:43.145122  106287 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0912 22:01:43.145130  106287 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0912 22:01:43.145136  106287 command_runner.go:130] > [crio]
	I0912 22:01:43.145143  106287 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0912 22:01:43.145150  106287 command_runner.go:130] > # containers images, in this directory.
	I0912 22:01:43.145160  106287 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I0912 22:01:43.145174  106287 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0912 22:01:43.145187  106287 command_runner.go:130] > # runroot = "/tmp/containers-user-1000/containers"
	I0912 22:01:43.145200  106287 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0912 22:01:43.145214  106287 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0912 22:01:43.145224  106287 command_runner.go:130] > # storage_driver = "vfs"
	I0912 22:01:43.145232  106287 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0912 22:01:43.145240  106287 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0912 22:01:43.145244  106287 command_runner.go:130] > # storage_option = [
	I0912 22:01:43.145253  106287 command_runner.go:130] > # ]
	I0912 22:01:43.145267  106287 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0912 22:01:43.145281  106287 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0912 22:01:43.145292  106287 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0912 22:01:43.145304  106287 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0912 22:01:43.145317  106287 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0912 22:01:43.145324  106287 command_runner.go:130] > # always happen on a node reboot
	I0912 22:01:43.145330  106287 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0912 22:01:43.145343  106287 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0912 22:01:43.145362  106287 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0912 22:01:43.145382  106287 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0912 22:01:43.145394  106287 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0912 22:01:43.145408  106287 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0912 22:01:43.145423  106287 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0912 22:01:43.145435  106287 command_runner.go:130] > # internal_wipe = true
	I0912 22:01:43.145448  106287 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0912 22:01:43.145461  106287 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0912 22:01:43.145473  106287 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0912 22:01:43.145485  106287 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0912 22:01:43.145496  106287 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0912 22:01:43.145502  106287 command_runner.go:130] > [crio.api]
	I0912 22:01:43.145511  106287 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0912 22:01:43.145523  106287 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0912 22:01:43.145535  106287 command_runner.go:130] > # IP address on which the stream server will listen.
	I0912 22:01:43.145546  106287 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0912 22:01:43.145559  106287 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0912 22:01:43.145570  106287 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0912 22:01:43.145585  106287 command_runner.go:130] > # stream_port = "0"
	I0912 22:01:43.145597  106287 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0912 22:01:43.145609  106287 command_runner.go:130] > # stream_enable_tls = false
	I0912 22:01:43.145622  106287 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0912 22:01:43.145633  106287 command_runner.go:130] > # stream_idle_timeout = ""
	I0912 22:01:43.145646  106287 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0912 22:01:43.145657  106287 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0912 22:01:43.145665  106287 command_runner.go:130] > # minutes.
	I0912 22:01:43.145672  106287 command_runner.go:130] > # stream_tls_cert = ""
	I0912 22:01:43.145683  106287 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0912 22:01:43.145697  106287 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0912 22:01:43.145707  106287 command_runner.go:130] > # stream_tls_key = ""
	I0912 22:01:43.145720  106287 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0912 22:01:43.145734  106287 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0912 22:01:43.145746  106287 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0912 22:01:43.145764  106287 command_runner.go:130] > # stream_tls_ca = ""
	I0912 22:01:43.145781  106287 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0912 22:01:43.145792  106287 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I0912 22:01:43.145823  106287 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0912 22:01:43.145838  106287 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I0912 22:01:43.145868  106287 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0912 22:01:43.145881  106287 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0912 22:01:43.145891  106287 command_runner.go:130] > [crio.runtime]
	I0912 22:01:43.145904  106287 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0912 22:01:43.145916  106287 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0912 22:01:43.145925  106287 command_runner.go:130] > # "nofile=1024:2048"
	I0912 22:01:43.145932  106287 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0912 22:01:43.145941  106287 command_runner.go:130] > # default_ulimits = [
	I0912 22:01:43.145950  106287 command_runner.go:130] > # ]
	I0912 22:01:43.145964  106287 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0912 22:01:43.145974  106287 command_runner.go:130] > # no_pivot = false
	I0912 22:01:43.145987  106287 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0912 22:01:43.146000  106287 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0912 22:01:43.146011  106287 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0912 22:01:43.146016  106287 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0912 22:01:43.146027  106287 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0912 22:01:43.146045  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:01:43.146055  106287 command_runner.go:130] > # conmon = ""
	I0912 22:01:43.146066  106287 command_runner.go:130] > # Cgroup setting for conmon
	I0912 22:01:43.146080  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0912 22:01:43.146094  106287 command_runner.go:130] > conmon_cgroup = "pod"
	I0912 22:01:43.146101  106287 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0912 22:01:43.146111  106287 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0912 22:01:43.146126  106287 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0912 22:01:43.146136  106287 command_runner.go:130] > # conmon_env = [
	I0912 22:01:43.146145  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146157  106287 command_runner.go:130] > # Additional environment variables to set for all the
	I0912 22:01:43.146169  106287 command_runner.go:130] > # containers. These are overridden if set in the
	I0912 22:01:43.146181  106287 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0912 22:01:43.146188  106287 command_runner.go:130] > # default_env = [
	I0912 22:01:43.146192  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146205  106287 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0912 22:01:43.146216  106287 command_runner.go:130] > # selinux = false
	I0912 22:01:43.146230  106287 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0912 22:01:43.146247  106287 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0912 22:01:43.146259  106287 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0912 22:01:43.146268  106287 command_runner.go:130] > # seccomp_profile = ""
	I0912 22:01:43.146274  106287 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0912 22:01:43.146286  106287 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0912 22:01:43.146300  106287 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0912 22:01:43.146310  106287 command_runner.go:130] > # which might increase security.
	I0912 22:01:43.146321  106287 command_runner.go:130] > # seccomp_use_default_when_empty = true
	I0912 22:01:43.146334  106287 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0912 22:01:43.146347  106287 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0912 22:01:43.146357  106287 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0912 22:01:43.146370  106287 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0912 22:01:43.146382  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:01:43.146394  106287 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0912 22:01:43.146406  106287 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0912 22:01:43.146417  106287 command_runner.go:130] > # the cgroup blockio controller.
	I0912 22:01:43.146427  106287 command_runner.go:130] > # blockio_config_file = ""
	I0912 22:01:43.146439  106287 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0912 22:01:43.146449  106287 command_runner.go:130] > # irqbalance daemon.
	I0912 22:01:43.146460  106287 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0912 22:01:43.146475  106287 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0912 22:01:43.146487  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:01:43.146497  106287 command_runner.go:130] > # rdt_config_file = ""
	I0912 22:01:43.146509  106287 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0912 22:01:43.146520  106287 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0912 22:01:43.146530  106287 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0912 22:01:43.146538  106287 command_runner.go:130] > # separate_pull_cgroup = ""
	I0912 22:01:43.146552  106287 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0912 22:01:43.146566  106287 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0912 22:01:43.146576  106287 command_runner.go:130] > # will be added.
	I0912 22:01:43.146586  106287 command_runner.go:130] > # default_capabilities = [
	I0912 22:01:43.146595  106287 command_runner.go:130] > # 	"CHOWN",
	I0912 22:01:43.146605  106287 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0912 22:01:43.146611  106287 command_runner.go:130] > # 	"FSETID",
	I0912 22:01:43.146617  106287 command_runner.go:130] > # 	"FOWNER",
	I0912 22:01:43.146622  106287 command_runner.go:130] > # 	"SETGID",
	I0912 22:01:43.146635  106287 command_runner.go:130] > # 	"SETUID",
	I0912 22:01:43.146646  106287 command_runner.go:130] > # 	"SETPCAP",
	I0912 22:01:43.146657  106287 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0912 22:01:43.146666  106287 command_runner.go:130] > # 	"KILL",
	I0912 22:01:43.146675  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146690  106287 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0912 22:01:43.146700  106287 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0912 22:01:43.146709  106287 command_runner.go:130] > # add_inheritable_capabilities = true
	I0912 22:01:43.146723  106287 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0912 22:01:43.146737  106287 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:01:43.146747  106287 command_runner.go:130] > # default_sysctls = [
	I0912 22:01:43.146755  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146766  106287 command_runner.go:130] > # List of devices on the host that a
	I0912 22:01:43.146779  106287 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0912 22:01:43.146787  106287 command_runner.go:130] > # allowed_devices = [
	I0912 22:01:43.146797  106287 command_runner.go:130] > # 	"/dev/fuse",
	I0912 22:01:43.146806  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146818  106287 command_runner.go:130] > # List of additional devices. specified as
	I0912 22:01:43.146867  106287 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0912 22:01:43.146876  106287 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0912 22:01:43.146886  106287 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0912 22:01:43.146897  106287 command_runner.go:130] > # additional_devices = [
	I0912 22:01:43.146906  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146918  106287 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0912 22:01:43.146927  106287 command_runner.go:130] > # cdi_spec_dirs = [
	I0912 22:01:43.146936  106287 command_runner.go:130] > # 	"/etc/cdi",
	I0912 22:01:43.146946  106287 command_runner.go:130] > # 	"/var/run/cdi",
	I0912 22:01:43.146954  106287 command_runner.go:130] > # ]
	I0912 22:01:43.146960  106287 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0912 22:01:43.146975  106287 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0912 22:01:43.146986  106287 command_runner.go:130] > # Defaults to false.
	I0912 22:01:43.146998  106287 command_runner.go:130] > # device_ownership_from_security_context = false
	I0912 22:01:43.147011  106287 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0912 22:01:43.147024  106287 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0912 22:01:43.147033  106287 command_runner.go:130] > # hooks_dir = [
	I0912 22:01:43.147042  106287 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0912 22:01:43.147048  106287 command_runner.go:130] > # ]
	I0912 22:01:43.147063  106287 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0912 22:01:43.147078  106287 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0912 22:01:43.147093  106287 command_runner.go:130] > # its default mounts from the following two files:
	I0912 22:01:43.147102  106287 command_runner.go:130] > #
	I0912 22:01:43.147115  106287 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0912 22:01:43.147126  106287 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0912 22:01:43.147133  106287 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0912 22:01:43.147142  106287 command_runner.go:130] > #
	I0912 22:01:43.147156  106287 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0912 22:01:43.147170  106287 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0912 22:01:43.147183  106287 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0912 22:01:43.147195  106287 command_runner.go:130] > #      only add mounts it finds in this file.
	I0912 22:01:43.147203  106287 command_runner.go:130] > #
	I0912 22:01:43.147212  106287 command_runner.go:130] > # default_mounts_file = ""
	I0912 22:01:43.147220  106287 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0912 22:01:43.147234  106287 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0912 22:01:43.147245  106287 command_runner.go:130] > # pids_limit = 0
	I0912 22:01:43.147262  106287 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0912 22:01:43.147275  106287 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0912 22:01:43.147288  106287 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0912 22:01:43.147300  106287 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0912 22:01:43.147308  106287 command_runner.go:130] > # log_size_max = -1
	I0912 22:01:43.147323  106287 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0912 22:01:43.147334  106287 command_runner.go:130] > # log_to_journald = false
	I0912 22:01:43.147347  106287 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0912 22:01:43.147358  106287 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0912 22:01:43.147370  106287 command_runner.go:130] > # Path to directory for container attach sockets.
	I0912 22:01:43.147381  106287 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0912 22:01:43.147389  106287 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0912 22:01:43.147398  106287 command_runner.go:130] > # bind_mount_prefix = ""
	I0912 22:01:43.147416  106287 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0912 22:01:43.147427  106287 command_runner.go:130] > # read_only = false
	I0912 22:01:43.147440  106287 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0912 22:01:43.147454  106287 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0912 22:01:43.147464  106287 command_runner.go:130] > # live configuration reload.
	I0912 22:01:43.147474  106287 command_runner.go:130] > # log_level = "info"
	I0912 22:01:43.147487  106287 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0912 22:01:43.147499  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:01:43.147509  106287 command_runner.go:130] > # log_filter = ""
	I0912 22:01:43.147522  106287 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0912 22:01:43.147535  106287 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0912 22:01:43.147547  106287 command_runner.go:130] > # separated by comma.
	I0912 22:01:43.147555  106287 command_runner.go:130] > # uid_mappings = ""
	I0912 22:01:43.147565  106287 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0912 22:01:43.147579  106287 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0912 22:01:43.147590  106287 command_runner.go:130] > # separated by comma.
	I0912 22:01:43.147600  106287 command_runner.go:130] > # gid_mappings = ""
	I0912 22:01:43.147613  106287 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0912 22:01:43.147627  106287 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:01:43.147639  106287 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:01:43.147644  106287 command_runner.go:130] > # minimum_mappable_uid = -1
	I0912 22:01:43.147655  106287 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0912 22:01:43.147670  106287 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0912 22:01:43.147686  106287 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0912 22:01:43.147696  106287 command_runner.go:130] > # minimum_mappable_gid = -1
	I0912 22:01:43.147709  106287 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0912 22:01:43.147722  106287 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0912 22:01:43.147731  106287 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0912 22:01:43.147740  106287 command_runner.go:130] > # ctr_stop_timeout = 30
	I0912 22:01:43.147753  106287 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0912 22:01:43.147770  106287 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0912 22:01:43.147782  106287 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0912 22:01:43.147793  106287 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0912 22:01:43.147804  106287 command_runner.go:130] > # drop_infra_ctr = true
	I0912 22:01:43.147814  106287 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0912 22:01:43.147825  106287 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0912 22:01:43.147840  106287 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0912 22:01:43.147851  106287 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0912 22:01:43.147864  106287 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0912 22:01:43.147875  106287 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0912 22:01:43.147884  106287 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0912 22:01:43.147912  106287 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0912 22:01:43.147923  106287 command_runner.go:130] > # pinns_path = ""
	I0912 22:01:43.147937  106287 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0912 22:01:43.147950  106287 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0912 22:01:43.147964  106287 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0912 22:01:43.147977  106287 command_runner.go:130] > # default_runtime = "runc"
	I0912 22:01:43.147986  106287 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0912 22:01:43.147999  106287 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0912 22:01:43.148017  106287 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0912 22:01:43.148029  106287 command_runner.go:130] > # creation as a file is not desired either.
	I0912 22:01:43.148045  106287 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0912 22:01:43.148058  106287 command_runner.go:130] > # the hostname is being managed dynamically.
	I0912 22:01:43.148067  106287 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0912 22:01:43.148074  106287 command_runner.go:130] > # ]
	I0912 22:01:43.148087  106287 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0912 22:01:43.148102  106287 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0912 22:01:43.148116  106287 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0912 22:01:43.148129  106287 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0912 22:01:43.148140  106287 command_runner.go:130] > #
	I0912 22:01:43.148151  106287 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0912 22:01:43.148159  106287 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0912 22:01:43.148167  106287 command_runner.go:130] > #  runtime_type = "oci"
	I0912 22:01:43.148178  106287 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0912 22:01:43.148190  106287 command_runner.go:130] > #  privileged_without_host_devices = false
	I0912 22:01:43.148200  106287 command_runner.go:130] > #  allowed_annotations = []
	I0912 22:01:43.148209  106287 command_runner.go:130] > # Where:
	I0912 22:01:43.148221  106287 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0912 22:01:43.148235  106287 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0912 22:01:43.148245  106287 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0912 22:01:43.148257  106287 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0912 22:01:43.148268  106287 command_runner.go:130] > #   in $PATH.
	I0912 22:01:43.148284  106287 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0912 22:01:43.148295  106287 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0912 22:01:43.148318  106287 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0912 22:01:43.148326  106287 command_runner.go:130] > #   state.
	I0912 22:01:43.148333  106287 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0912 22:01:43.148350  106287 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0912 22:01:43.148364  106287 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0912 22:01:43.148377  106287 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0912 22:01:43.148391  106287 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0912 22:01:43.148405  106287 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0912 22:01:43.148414  106287 command_runner.go:130] > #   The currently recognized values are:
	I0912 22:01:43.148426  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0912 22:01:43.148441  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0912 22:01:43.148456  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0912 22:01:43.148470  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0912 22:01:43.148485  106287 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0912 22:01:43.148503  106287 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0912 22:01:43.148515  106287 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0912 22:01:43.148530  106287 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0912 22:01:43.148542  106287 command_runner.go:130] > #   should be moved to the container's cgroup
	I0912 22:01:43.148552  106287 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0912 22:01:43.148561  106287 command_runner.go:130] > runtime_path = "/usr/lib/cri-o-runc/sbin/runc"
	I0912 22:01:43.148570  106287 command_runner.go:130] > runtime_type = "oci"
	I0912 22:01:43.148584  106287 command_runner.go:130] > runtime_root = "/run/runc"
	I0912 22:01:43.148603  106287 command_runner.go:130] > runtime_config_path = ""
	I0912 22:01:43.148610  106287 command_runner.go:130] > monitor_path = ""
	I0912 22:01:43.148616  106287 command_runner.go:130] > monitor_cgroup = ""
	I0912 22:01:43.148630  106287 command_runner.go:130] > monitor_exec_cgroup = ""
	I0912 22:01:43.148691  106287 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0912 22:01:43.148702  106287 command_runner.go:130] > # running containers
	I0912 22:01:43.148707  106287 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0912 22:01:43.148715  106287 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0912 22:01:43.148730  106287 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0912 22:01:43.148743  106287 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0912 22:01:43.148755  106287 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0912 22:01:43.148764  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0912 22:01:43.148775  106287 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0912 22:01:43.148785  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0912 22:01:43.148794  106287 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0912 22:01:43.148799  106287 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0912 22:01:43.148810  106287 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0912 22:01:43.148827  106287 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0912 22:01:43.148841  106287 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0912 22:01:43.148856  106287 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0912 22:01:43.148872  106287 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0912 22:01:43.148881  106287 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0912 22:01:43.148895  106287 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0912 22:01:43.148911  106287 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0912 22:01:43.148924  106287 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0912 22:01:43.148939  106287 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0912 22:01:43.148949  106287 command_runner.go:130] > # Example:
	I0912 22:01:43.148957  106287 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0912 22:01:43.148965  106287 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0912 22:01:43.148972  106287 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0912 22:01:43.148985  106287 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0912 22:01:43.148995  106287 command_runner.go:130] > # cpuset = 0
	I0912 22:01:43.149002  106287 command_runner.go:130] > # cpushares = "0-1"
	I0912 22:01:43.149012  106287 command_runner.go:130] > # Where:
	I0912 22:01:43.149021  106287 command_runner.go:130] > # The workload name is workload-type.
	I0912 22:01:43.149038  106287 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0912 22:01:43.149048  106287 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0912 22:01:43.149054  106287 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0912 22:01:43.149070  106287 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0912 22:01:43.149088  106287 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0912 22:01:43.149097  106287 command_runner.go:130] > # 
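The workloads section above is driven purely by pod annotations. Below is a minimal, hypothetical client-go sketch (the pod name, container name, and cpushares value are invented for illustration) of a pod opting into the example workload-type workload, using exactly the annotation keys described in the comments above.

// Hypothetical sketch: a Pod opting into the "workload-type" workload from the
// example above. The activation annotation is matched by key only; the
// per-container override follows the io.crio.workload-type/$container_name form.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "demo", // hypothetical pod name
			Annotations: map[string]string{
				// key-only activation annotation; the value is ignored
				"io.crio/workload": "",
				// per-container cpu-shares override (value is an assumption)
				"io.crio.workload-type/app": `{"cpushares": "512"}`,
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{
				{Name: "app", Image: "registry.k8s.io/pause:3.9"},
			},
		},
	}
	fmt.Println(pod.Name, pod.Annotations)
}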
	I0912 22:01:43.149108  106287 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0912 22:01:43.149117  106287 command_runner.go:130] > #
	I0912 22:01:43.149127  106287 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0912 22:01:43.149137  106287 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0912 22:01:43.149149  106287 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0912 22:01:43.149164  106287 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0912 22:01:43.149177  106287 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0912 22:01:43.149186  106287 command_runner.go:130] > [crio.image]
	I0912 22:01:43.149199  106287 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0912 22:01:43.149210  106287 command_runner.go:130] > # default_transport = "docker://"
	I0912 22:01:43.149220  106287 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0912 22:01:43.149228  106287 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:01:43.149243  106287 command_runner.go:130] > # global_auth_file = ""
	I0912 22:01:43.149256  106287 command_runner.go:130] > # The image used to instantiate infra containers.
	I0912 22:01:43.149267  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:01:43.149278  106287 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0912 22:01:43.149292  106287 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0912 22:01:43.149304  106287 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0912 22:01:43.149312  106287 command_runner.go:130] > # This option supports live configuration reload.
	I0912 22:01:43.149316  106287 command_runner.go:130] > # pause_image_auth_file = ""
	I0912 22:01:43.149324  106287 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0912 22:01:43.149333  106287 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0912 22:01:43.149346  106287 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0912 22:01:43.149359  106287 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0912 22:01:43.149370  106287 command_runner.go:130] > # pause_command = "/pause"
	I0912 22:01:43.149383  106287 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0912 22:01:43.149397  106287 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0912 22:01:43.149409  106287 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0912 22:01:43.149418  106287 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0912 22:01:43.149426  106287 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0912 22:01:43.149435  106287 command_runner.go:130] > # signature_policy = ""
	I0912 22:01:43.149447  106287 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0912 22:01:43.149455  106287 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0912 22:01:43.149462  106287 command_runner.go:130] > # changing them here.
	I0912 22:01:43.149466  106287 command_runner.go:130] > # insecure_registries = [
	I0912 22:01:43.149471  106287 command_runner.go:130] > # ]
	I0912 22:01:43.149478  106287 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0912 22:01:43.149489  106287 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0912 22:01:43.149501  106287 command_runner.go:130] > # image_volumes = "mkdir"
	I0912 22:01:43.149513  106287 command_runner.go:130] > # Temporary directory to use for storing big files
	I0912 22:01:43.149524  106287 command_runner.go:130] > # big_files_temporary_dir = ""
	I0912 22:01:43.149537  106287 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0912 22:01:43.149547  106287 command_runner.go:130] > # CNI plugins.
	I0912 22:01:43.149556  106287 command_runner.go:130] > [crio.network]
	I0912 22:01:43.149562  106287 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0912 22:01:43.149570  106287 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0912 22:01:43.149577  106287 command_runner.go:130] > # cni_default_network = ""
	I0912 22:01:43.149582  106287 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0912 22:01:43.149592  106287 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0912 22:01:43.149601  106287 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0912 22:01:43.149608  106287 command_runner.go:130] > # plugin_dirs = [
	I0912 22:01:43.149612  106287 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0912 22:01:43.149617  106287 command_runner.go:130] > # ]
	I0912 22:01:43.149624  106287 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0912 22:01:43.149632  106287 command_runner.go:130] > [crio.metrics]
	I0912 22:01:43.149640  106287 command_runner.go:130] > # Globally enable or disable metrics support.
	I0912 22:01:43.149644  106287 command_runner.go:130] > # enable_metrics = false
	I0912 22:01:43.149651  106287 command_runner.go:130] > # Specify enabled metrics collectors.
	I0912 22:01:43.149655  106287 command_runner.go:130] > # Per default all metrics are enabled.
	I0912 22:01:43.149664  106287 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0912 22:01:43.149672  106287 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0912 22:01:43.149680  106287 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0912 22:01:43.149686  106287 command_runner.go:130] > # metrics_collectors = [
	I0912 22:01:43.149690  106287 command_runner.go:130] > # 	"operations",
	I0912 22:01:43.149706  106287 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0912 22:01:43.149717  106287 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0912 22:01:43.149731  106287 command_runner.go:130] > # 	"operations_errors",
	I0912 22:01:43.149739  106287 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0912 22:01:43.149743  106287 command_runner.go:130] > # 	"image_pulls_by_name",
	I0912 22:01:43.149750  106287 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0912 22:01:43.149754  106287 command_runner.go:130] > # 	"image_pulls_failures",
	I0912 22:01:43.149761  106287 command_runner.go:130] > # 	"image_pulls_successes",
	I0912 22:01:43.149765  106287 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0912 22:01:43.149771  106287 command_runner.go:130] > # 	"image_layer_reuse",
	I0912 22:01:43.149776  106287 command_runner.go:130] > # 	"containers_oom_total",
	I0912 22:01:43.149782  106287 command_runner.go:130] > # 	"containers_oom",
	I0912 22:01:43.149786  106287 command_runner.go:130] > # 	"processes_defunct",
	I0912 22:01:43.149793  106287 command_runner.go:130] > # 	"operations_total",
	I0912 22:01:43.149797  106287 command_runner.go:130] > # 	"operations_latency_seconds",
	I0912 22:01:43.149804  106287 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0912 22:01:43.149808  106287 command_runner.go:130] > # 	"operations_errors_total",
	I0912 22:01:43.149815  106287 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0912 22:01:43.149820  106287 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0912 22:01:43.149826  106287 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0912 22:01:43.149834  106287 command_runner.go:130] > # 	"image_pulls_success_total",
	I0912 22:01:43.149841  106287 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0912 22:01:43.149845  106287 command_runner.go:130] > # 	"containers_oom_count_total",
	I0912 22:01:43.149851  106287 command_runner.go:130] > # ]
	I0912 22:01:43.149856  106287 command_runner.go:130] > # The port on which the metrics server will listen.
	I0912 22:01:43.149862  106287 command_runner.go:130] > # metrics_port = 9090
	I0912 22:01:43.149868  106287 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0912 22:01:43.149874  106287 command_runner.go:130] > # metrics_socket = ""
	I0912 22:01:43.149879  106287 command_runner.go:130] > # The certificate for the secure metrics server.
	I0912 22:01:43.149888  106287 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0912 22:01:43.149894  106287 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0912 22:01:43.149901  106287 command_runner.go:130] > # certificate on any modification event.
	I0912 22:01:43.149905  106287 command_runner.go:130] > # metrics_cert = ""
	I0912 22:01:43.149912  106287 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0912 22:01:43.149918  106287 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0912 22:01:43.149925  106287 command_runner.go:130] > # metrics_key = ""
	I0912 22:01:43.149930  106287 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0912 22:01:43.149937  106287 command_runner.go:130] > [crio.tracing]
	I0912 22:01:43.149945  106287 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0912 22:01:43.149951  106287 command_runner.go:130] > # enable_tracing = false
	I0912 22:01:43.149957  106287 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0912 22:01:43.149963  106287 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0912 22:01:43.149969  106287 command_runner.go:130] > # Number of samples to collect per million spans.
	I0912 22:01:43.149976  106287 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0912 22:01:43.149982  106287 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0912 22:01:43.149988  106287 command_runner.go:130] > [crio.stats]
	I0912 22:01:43.149994  106287 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0912 22:01:43.150001  106287 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0912 22:01:43.150006  106287 command_runner.go:130] > # stats_collection_period = 0
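The crio.conf dumped above is plain TOML, so the values it reports can be read back programmatically. A minimal sketch, assuming the github.com/BurntSushi/toml decoder and the conventional /etc/crio/crio.conf path (neither is asserted by the log), that retrieves the pause_image and runc runtime_path values logged above:

// Minimal sketch (not minikube's own code): read back two values from a
// crio.conf-style TOML file. Decoder library and file path are assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
		Runtime struct {
			Runtimes map[string]struct {
				RuntimePath string `toml:"runtime_path"`
			} `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)
	fmt.Println("runc path:", cfg.Crio.Runtime.Runtimes["runc"].RuntimePath)
}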
	I0912 22:01:43.150108  106287 cni.go:84] Creating CNI manager for ""
	I0912 22:01:43.150123  106287 cni.go:136] 2 nodes found, recommending kindnet
	I0912 22:01:43.150132  106287 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 22:01:43.150153  106287 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-947523 NodeName:multinode-947523-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:01:43.150261  106287 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-947523-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:01:43.150306  106287 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=multinode-947523-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0912 22:01:43.150355  106287 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 22:01:43.157673  106287 command_runner.go:130] > kubeadm
	I0912 22:01:43.157693  106287 command_runner.go:130] > kubectl
	I0912 22:01:43.157700  106287 command_runner.go:130] > kubelet
	I0912 22:01:43.158307  106287 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:01:43.158373  106287 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0912 22:01:43.166036  106287 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0912 22:01:43.181519  106287 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:01:43.197037  106287 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:01:43.199993  106287 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:01:43.209521  106287 host.go:66] Checking if "multinode-947523" exists ...
	I0912 22:01:43.209760  106287 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:01:43.209809  106287 start.go:304] JoinCluster: &{Name:multinode-947523 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-947523 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:
9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:01:43.209894  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0912 22:01:43.209929  106287 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:01:43.227115  106287 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:01:43.375598  106287 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token vc04iv.iyji9j86vd1h3txg --discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 
	I0912 22:01:43.375677  106287 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0912 22:01:43.375721  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vc04iv.iyji9j86vd1h3txg --discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-947523-m02"
	I0912 22:01:43.408876  106287 command_runner.go:130] > [preflight] Running pre-flight checks
	I0912 22:01:43.435195  106287 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:01:43.435223  106287 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 22:01:43.435232  106287 command_runner.go:130] > OS: Linux
	I0912 22:01:43.435240  106287 command_runner.go:130] > CGROUPS_CPU: enabled
	I0912 22:01:43.435251  106287 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0912 22:01:43.435259  106287 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0912 22:01:43.435268  106287 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0912 22:01:43.435277  106287 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0912 22:01:43.435289  106287 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0912 22:01:43.435303  106287 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0912 22:01:43.435314  106287 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0912 22:01:43.435322  106287 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0912 22:01:43.513182  106287 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0912 22:01:43.513204  106287 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0912 22:01:43.538253  106287 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:01:43.538507  106287 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:01:43.538526  106287 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0912 22:01:43.612983  106287 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0912 22:01:45.625786  106287 command_runner.go:130] > This node has joined the cluster:
	I0912 22:01:45.625815  106287 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0912 22:01:45.625825  106287 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0912 22:01:45.625835  106287 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0912 22:01:45.628372  106287 command_runner.go:130] ! W0912 22:01:43.408324    1114 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0912 22:01:45.628404  106287 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1041-gcp\n", err: exit status 1
	I0912 22:01:45.628419  106287 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0912 22:01:45.628443  106287 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token vc04iv.iyji9j86vd1h3txg --discovery-token-ca-cert-hash sha256:92c834105e8f46c1c711c4776cc407b0f7a667810fb8c2450d503b2b71126bf1 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-947523-m02": (2.25270441s)
	I0912 22:01:45.628467  106287 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0912 22:01:45.789805  106287 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0912 22:01:45.789842  106287 start.go:306] JoinCluster complete in 2.580034449s
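The join just completed is a two-step flow: a join command with a non-expiring token is minted on the control plane, then replayed on the worker with extra flags, and finally kubelet is enabled on the worker. A rough sketch of the same sequence follows, with the SSH transport elided (in the real run each command goes through ssh_runner to the right node); only the binary path and flags shown in the log are used.

// Rough sketch of the join flow logged above; SSH transport elided.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	path := "PATH=/var/lib/minikube/binaries/v1.28.1:" + os.Getenv("PATH")

	// Step 1 (control plane): mint a non-expiring token and print the join command.
	out, err := exec.Command("sudo", "env", path,
		"kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatal(err)
	}

	// Step 2 (worker): append the extra flags seen in the log and run the result.
	joinCmd := strings.TrimSpace(string(out)) +
		" --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-947523-m02"
	fmt.Println("would run on the worker:", joinCmd)

	// Step 3 (worker): enable and start kubelet, as in the log.
	fmt.Println("would run on the worker: sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet")
}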
	I0912 22:01:45.789854  106287 cni.go:84] Creating CNI manager for ""
	I0912 22:01:45.789862  106287 cni.go:136] 2 nodes found, recommending kindnet
	I0912 22:01:45.789919  106287 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:01:45.793364  106287 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0912 22:01:45.793387  106287 command_runner.go:130] >   Size: 3955775   	Blocks: 7736       IO Block: 4096   regular file
	I0912 22:01:45.793397  106287 command_runner.go:130] > Device: 36h/54d	Inode: 555970      Links: 1
	I0912 22:01:45.793403  106287 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0912 22:01:45.793410  106287 command_runner.go:130] > Access: 2023-05-09 19:53:47.000000000 +0000
	I0912 22:01:45.793415  106287 command_runner.go:130] > Modify: 2023-05-09 19:53:47.000000000 +0000
	I0912 22:01:45.793421  106287 command_runner.go:130] > Change: 2023-09-12 21:43:43.379872388 +0000
	I0912 22:01:45.793426  106287 command_runner.go:130] >  Birth: 2023-09-12 21:43:43.355870062 +0000
	I0912 22:01:45.793464  106287 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 22:01:45.793472  106287 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 22:01:45.809148  106287 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:01:46.003256  106287 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0912 22:01:46.006818  106287 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0912 22:01:46.009356  106287 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0912 22:01:46.020460  106287 command_runner.go:130] > daemonset.apps/kindnet configured
	I0912 22:01:46.024494  106287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:46.024771  106287 kapi.go:59] client config for multinode-947523: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:01:46.025065  106287 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 22:01:46.025078  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.025086  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.025094  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.027131  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:46.027147  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.027154  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.027163  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.027178  106287 round_trippers.go:580]     Content-Length: 291
	I0912 22:01:46.027191  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.027201  106287 round_trippers.go:580]     Audit-Id: a49bb5af-7113-4389-a096-6f1e5defa44e
	I0912 22:01:46.027213  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.027223  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.027250  106287 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"419","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:46.027381  106287 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"419","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:46.027426  106287 round_trippers.go:463] PUT https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 22:01:46.027437  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.027448  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.027460  106287 round_trippers.go:473]     Content-Type: application/json
	I0912 22:01:46.027472  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.032972  106287 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0912 22:01:46.032997  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.033007  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.033014  106287 round_trippers.go:580]     Audit-Id: e6de20e8-99fb-4248-8f91-6cead83b4535
	I0912 22:01:46.033022  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.033029  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.033040  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.033048  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.033059  106287 round_trippers.go:580]     Content-Length: 291
	I0912 22:01:46.033090  106287 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"458","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:46.033247  106287 round_trippers.go:463] GET https://192.168.58.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0912 22:01:46.033259  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.033269  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.033274  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.035151  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:46.035173  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.035183  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.035191  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.035204  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.035212  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.035219  106287 round_trippers.go:580]     Content-Length: 291
	I0912 22:01:46.035227  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.035234  106287 round_trippers.go:580]     Audit-Id: 83987e1f-ffa9-4d2a-8392-b2be228eeb03
	I0912 22:01:46.035262  106287 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2cf79ccf-2f5b-44ee-9635-433b3f2b66dd","resourceVersion":"458","creationTimestamp":"2023-09-12T22:01:03Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0912 22:01:46.035353  106287 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-947523" context rescaled to 1 replicas
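The rescale just performed issues a raw GET and PUT against the coredns deployment's /scale subresource. The same operation expressed through typed client-go calls looks roughly like the sketch below; the kubeconfig path is an assumption, and the replica count of 1 matches the request body above.

// Sketch: rescale the kube-system/coredns deployment to 1 replica via the
// typed scale client, mirroring the raw GET/PUT shown in the log.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1 // same target as the PUT body in the log
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}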
	I0912 22:01:46.035386  106287 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0912 22:01:46.037103  106287 out.go:177] * Verifying Kubernetes components...
	I0912 22:01:46.038634  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:01:46.053424  106287 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:01:46.053770  106287 kapi.go:59] client config for multinode-947523: &rest.Config{Host:"https://192.168.58.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/multinode-947523/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:01:46.054182  106287 node_ready.go:35] waiting up to 6m0s for node "multinode-947523-m02" to be "Ready" ...
	I0912 22:01:46.054272  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:46.054283  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.054293  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.054305  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.057504  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:46.057534  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.057545  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.057555  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.057564  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.057578  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.057586  106287 round_trippers.go:580]     Audit-Id: a9e0d859-5a43-4123-bbc2-b74ef5befa6e
	I0912 22:01:46.057594  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.057786  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"457","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0912 22:01:46.058208  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:46.058222  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.058233  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.058242  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.062075  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:46.062095  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.062105  106287 round_trippers.go:580]     Audit-Id: b1f31a7c-f68b-4fe2-aa83-2a947c832d1a
	I0912 22:01:46.062112  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.062119  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.062126  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.062133  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.062145  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.062279  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"457","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 5101 chars]
	I0912 22:01:46.563474  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:46.563495  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:46.563502  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:46.563509  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:46.565740  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:46.565758  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:46.565765  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:46.565770  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:46.565775  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:46 GMT
	I0912 22:01:46.565781  106287 round_trippers.go:580]     Audit-Id: 8d6b06b2-39aa-434d-baba-2b674b11e7e0
	I0912 22:01:46.565787  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:46.565794  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:46.565926  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:47.063587  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:47.063605  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:47.063613  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:47.063619  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:47.065990  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:47.066007  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:47.066014  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:47.066019  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:47 GMT
	I0912 22:01:47.066025  106287 round_trippers.go:580]     Audit-Id: 2f38dc22-c193-4362-a4a5-d7af8034d923
	I0912 22:01:47.066030  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:47.066035  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:47.066040  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:47.066209  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:47.563744  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:47.563766  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:47.563773  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:47.563779  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:47.566135  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:47.566165  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:47.566176  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:47.566184  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:47.566191  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:47.566196  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:47.566201  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:47 GMT
	I0912 22:01:47.566206  106287 round_trippers.go:580]     Audit-Id: e6adc965-0cea-4f68-a299-f683e41ad182
	I0912 22:01:47.566336  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:48.062996  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:48.063020  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:48.063028  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:48.063033  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:48.065311  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:48.065334  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:48.065343  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:48.065350  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:48.065364  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:48.065376  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:48.065383  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:48 GMT
	I0912 22:01:48.065394  106287 round_trippers.go:580]     Audit-Id: ef6be701-86d3-4f4f-b05b-ecbdeeaf47b1
	I0912 22:01:48.065541  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:48.065843  106287 node_ready.go:58] node "multinode-947523-m02" has status "Ready":"False"
	I0912 22:01:48.563111  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:48.563132  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:48.563140  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:48.563146  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:48.565632  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:48.565669  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:48.565679  106287 round_trippers.go:580]     Audit-Id: 713cce7a-fd2b-44bc-9083-ab1a91829dee
	I0912 22:01:48.565689  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:48.565697  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:48.565704  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:48.565712  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:48.565725  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:48 GMT
	I0912 22:01:48.565837  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:49.063443  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:49.063464  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:49.063472  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:49.063478  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:49.065846  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:49.065869  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:49.065880  106287 round_trippers.go:580]     Audit-Id: 98ad1bf7-c739-4016-9213-8b38da61d91f
	I0912 22:01:49.065889  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:49.065898  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:49.065912  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:49.065921  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:49.065930  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:49 GMT
	I0912 22:01:49.066052  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:49.563557  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:49.563583  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:49.563591  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:49.563597  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:49.565936  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:49.565960  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:49.565970  106287 round_trippers.go:580]     Audit-Id: 15555988-2cee-4c01-8d12-a904d45deda5
	I0912 22:01:49.565977  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:49.565984  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:49.565994  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:49.566006  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:49.566014  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:49 GMT
	I0912 22:01:49.566152  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:50.062780  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:50.062811  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.062818  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.062824  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.064972  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:50.064996  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.065006  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.065015  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.065024  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.065036  106287 round_trippers.go:580]     Audit-Id: 1decc580-1aeb-4076-bc02-5185114d917f
	I0912 22:01:50.065046  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.065061  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.065168  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"472","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5210 chars]
	I0912 22:01:50.563784  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:50.563805  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.563813  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.563819  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.566104  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:50.566128  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.566138  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.566152  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.566160  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.566167  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.566175  106287 round_trippers.go:580]     Audit-Id: 23d7d58c-090e-47bf-8e52-f76238f8153c
	I0912 22:01:50.566183  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.566288  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"492","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5296 chars]
	I0912 22:01:50.566579  106287 node_ready.go:49] node "multinode-947523-m02" has status "Ready":"True"
	I0912 22:01:50.566594  106287 node_ready.go:38] duration metric: took 4.512389688s waiting for node "multinode-947523-m02" to be "Ready" ...
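The loop above is minikube's node_ready wait: it re-fetches the Node object roughly every 500ms and inspects its conditions until Ready flips from False to True. Below is a minimal client-go sketch of that pattern; the kubeconfig path, timeout, and hard-coded node name are illustrative assumptions rather than values taken from the test binary, which talks directly to https://192.168.58.2:8443.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForNodeReady polls the node roughly every 500ms, as the log does,
    // until its Ready condition reports True or the context expires.
    func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        // Assumed setup: kubeconfig in the default location; the node name is
        // taken from the log for illustration only.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitForNodeReady(ctx, cs, "multinode-947523-m02"); err != nil {
            panic(err)
        }
        fmt.Println("node is Ready")
    }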
	I0912 22:01:50.566602  106287 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:01:50.566650  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods
	I0912 22:01:50.566658  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.566664  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.566674  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.569904  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:50.569927  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.569936  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.569943  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.569952  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.569961  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.569974  106287 round_trippers.go:580]     Audit-Id: 2361c426-b4e0-4a57-be9c-f5a96ce40d66
	I0912 22:01:50.569983  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.570528  106287 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"492"},"items":[{"metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"400","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 76303 chars]
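From here the test switches to pod_ready waits: it lists kube-system pods, then checks each pod matching the system-critical label selectors for a PodReady condition of True. A hedged sketch of that check follows, reusing the clientset and imports from the node-readiness sketch above; the selector list is copied from the pod_ready.go:35 line earlier in the log.

    // systemPodsReady mirrors the pod_ready phase above: every pod in
    // kube-system matching the listed selectors must report PodReady=True.
    func systemPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        selectors := []string{
            "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
            "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
        }
        for _, sel := range selectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, pod := range pods.Items {
                ready := false
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                        break
                    }
                }
                if !ready {
                    return false, nil // not Ready yet; the caller keeps polling
                }
            }
        }
        return true, nil
    }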
	I0912 22:01:50.572831  106287 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.572910  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-6q54t
	I0912 22:01:50.572919  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.572926  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.572932  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.574780  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.574794  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.574801  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.574806  106287 round_trippers.go:580]     Audit-Id: 7d706365-1115-4ca8-9afb-d52bcaf9a233
	I0912 22:01:50.574812  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.574817  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.574822  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.574827  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.574966  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-6q54t","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"43187880-a314-47f3-b42a-608882b6043b","resourceVersion":"400","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6263 chars]
	I0912 22:01:50.575363  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.575374  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.575381  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.575387  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.577043  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.577063  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.577074  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.577083  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.577089  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.577094  106287 round_trippers.go:580]     Audit-Id: c03df9b5-301e-48cf-aab2-a2040c896805
	I0912 22:01:50.577099  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.577111  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.577262  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:50.577547  106287 pod_ready.go:92] pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:50.577559  106287 pod_ready.go:81] duration metric: took 4.709892ms waiting for pod "coredns-5dd5756b68-6q54t" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.577567  106287 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.577611  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-m8mcv
	I0912 22:01:50.577618  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.577625  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.577631  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.579267  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.579289  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.579296  106287 round_trippers.go:580]     Audit-Id: 18f43f0d-5fa2-4ca5-8d0a-b5f488abe314
	I0912 22:01:50.579302  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.579307  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.579312  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.579318  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.579325  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.579459  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-m8mcv","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"a925e809-5cce-4008-870d-3de1b67bbe83","resourceVersion":"461","creationTimestamp":"2023-09-12T22:01:16Z","deletionTimestamp":"2023-09-12T22:02:16Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"c88d587e-aa12-4929-92d7-cf697ea73a61","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c88d587e-aa12-4929-92d7-cf697ea73a61\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6338 chars]
	I0912 22:01:50.579850  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.579861  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.579868  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.579874  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.581455  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.581476  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.581486  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.581494  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.581510  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.581521  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.581533  106287 round_trippers.go:580]     Audit-Id: 931ca22c-db32-4d57-9624-6a8a09d3644d
	I0912 22:01:50.581543  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.581634  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:50.581900  106287 pod_ready.go:92] pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:50.581911  106287 pod_ready.go:81] duration metric: took 4.339213ms waiting for pod "coredns-5dd5756b68-m8mcv" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.581919  106287 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.581975  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-947523
	I0912 22:01:50.581984  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.581991  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.581996  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.583510  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.583524  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.583530  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.583536  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.583541  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.583546  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.583553  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.583559  106287 round_trippers.go:580]     Audit-Id: 4ef0e13a-e9dd-477f-a719-d693ac21afc8
	I0912 22:01:50.583661  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-947523","namespace":"kube-system","uid":"f4d30e28-adde-4a67-9b29-0029ad5d3239","resourceVersion":"328","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"86a8a258e4a4aba5f7a124f3591cc4df","kubernetes.io/config.mirror":"86a8a258e4a4aba5f7a124f3591cc4df","kubernetes.io/config.seen":"2023-09-12T22:01:03.833973700Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5833 chars]
	I0912 22:01:50.583988  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.583998  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.584005  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.584011  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.585549  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.585563  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.585570  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.585575  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.585581  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.585586  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.585591  106287 round_trippers.go:580]     Audit-Id: ef59b8b1-6335-4d46-ae94-44358e093d37
	I0912 22:01:50.585596  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.585727  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:50.585991  106287 pod_ready.go:92] pod "etcd-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:50.586002  106287 pod_ready.go:81] duration metric: took 4.074789ms waiting for pod "etcd-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.586016  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.586055  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-947523
	I0912 22:01:50.586063  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.586069  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.586081  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.587580  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.587594  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.587603  106287 round_trippers.go:580]     Audit-Id: f2744ce3-e64c-4aa0-8f76-0e50405add3c
	I0912 22:01:50.587610  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.587617  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.587626  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.587636  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.587648  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.587802  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-947523","namespace":"kube-system","uid":"06229ad2-51aa-408c-9fba-049fdaa4cf47","resourceVersion":"285","creationTimestamp":"2023-09-12T22:01:02Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"0d9718bc3e967e6627626ff1f6f24854","kubernetes.io/config.mirror":"0d9718bc3e967e6627626ff1f6f24854","kubernetes.io/config.seen":"2023-09-12T22:00:57.710584734Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:02Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8219 chars]
	I0912 22:01:50.588174  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.588192  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.588203  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.588212  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.589743  106287 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0912 22:01:50.589760  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.589769  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.589777  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.589788  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.589804  106287 round_trippers.go:580]     Audit-Id: 66118bfd-4e62-40ac-b4f1-9b90137ce511
	I0912 22:01:50.589813  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.589826  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.589916  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:50.590182  106287 pod_ready.go:92] pod "kube-apiserver-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:50.590200  106287 pod_ready.go:81] duration metric: took 4.177401ms waiting for pod "kube-apiserver-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.590208  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:50.764677  106287 request.go:629] Waited for 174.389358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-947523
	I0912 22:01:50.764733  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-947523
	I0912 22:01:50.764738  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.764746  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.764752  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.767283  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:50.767306  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.767314  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.767320  106287 round_trippers.go:580]     Audit-Id: f2208799-1c30-46d5-be99-d891ce259a1c
	I0912 22:01:50.767325  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.767331  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.767339  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.767345  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.767473  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-947523","namespace":"kube-system","uid":"342d1648-c610-467f-91d4-f47bb5c83634","resourceVersion":"288","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee87f8fd4f4e6399c1c60570a26046b4","kubernetes.io/config.mirror":"ee87f8fd4f4e6399c1c60570a26046b4","kubernetes.io/config.seen":"2023-09-12T22:01:03.833980405Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7794 chars]
	I0912 22:01:50.964256  106287 request.go:629] Waited for 196.360951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.964335  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:50.964344  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:50.964356  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:50.964374  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:50.966592  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:50.966612  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:50.966619  106287 round_trippers.go:580]     Audit-Id: 8a0b0cf1-1fd2-43f9-be8d-524dd4974ec3
	I0912 22:01:50.966624  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:50.966636  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:50.966644  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:50.966656  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:50.966663  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:50 GMT
	I0912 22:01:50.966788  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:50.967110  106287 pod_ready.go:92] pod "kube-controller-manager-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:50.967126  106287 pod_ready.go:81] duration metric: took 376.912293ms waiting for pod "kube-controller-manager-multinode-947523" in "kube-system" namespace to be "Ready" ...
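The "Waited for ... due to client-side throttling, not priority and fairness" messages in this stretch come from client-go's own token-bucket rate limiter rather than from the API server. The sketch below only shows where those limits live; the raised values are arbitrary illustrations, and the test itself appears to run with the usual client-go defaults, which is why the pauses show up here. It reuses the imports from the first sketch.

    // newClientWithHigherLimits: rest.Config's QPS and Burst default to 5 and
    // 10 when left at zero, and client-go delays requests beyond that budget.
    func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }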
	I0912 22:01:50.967136  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-n6dqw" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:51.164577  106287 request.go:629] Waited for 197.3568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6dqw
	I0912 22:01:51.164653  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-n6dqw
	I0912 22:01:51.164661  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:51.164672  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:51.164683  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:51.168299  106287 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0912 22:01:51.168333  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:51.168343  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:51.168351  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:51.168358  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:51 GMT
	I0912 22:01:51.168365  106287 round_trippers.go:580]     Audit-Id: 8cdbc6e2-0e0d-435e-be97-90b12dae6f96
	I0912 22:01:51.168372  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:51.168386  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:51.168497  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-n6dqw","generateName":"kube-proxy-","namespace":"kube-system","uid":"06ecc375-04a2-4390-97d9-279187e5cccb","resourceVersion":"481","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d3a4320-e6cf-4430-9f20-cd5151fa4503","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d3a4320-e6cf-4430-9f20-cd5151fa4503\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0912 22:01:51.364404  106287 request.go:629] Waited for 195.348911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:51.364459  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523-m02
	I0912 22:01:51.364465  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:51.364475  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:51.364490  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:51.367162  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:51.367197  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:51.367208  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:51.367218  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:51 GMT
	I0912 22:01:51.367232  106287 round_trippers.go:580]     Audit-Id: 53cf84a6-07be-4cbc-a322-32bf72f1b6d4
	I0912 22:01:51.367238  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:51.367245  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:51.367256  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:51.367388  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523-m02","uid":"4b81e65e-7991-42ca-af19-7d610c5bda1e","resourceVersion":"495","creationTimestamp":"2023-09-12T22:01:45Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:45Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 5176 chars]
	I0912 22:01:51.367678  106287 pod_ready.go:92] pod "kube-proxy-n6dqw" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:51.367691  106287 pod_ready.go:81] duration metric: took 400.543888ms waiting for pod "kube-proxy-n6dqw" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:51.367700  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p2j8w" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:51.564111  106287 request.go:629] Waited for 196.347964ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2j8w
	I0912 22:01:51.564182  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p2j8w
	I0912 22:01:51.564189  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:51.564202  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:51.564217  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:51.566610  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:51.566630  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:51.566637  106287 round_trippers.go:580]     Audit-Id: d845006d-3718-44bd-9a89-e2cbab5efdb1
	I0912 22:01:51.566643  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:51.566648  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:51.566655  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:51.566664  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:51.566677  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:51 GMT
	I0912 22:01:51.566784  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-p2j8w","generateName":"kube-proxy-","namespace":"kube-system","uid":"cc0d0912-c416-4d26-9520-8e414702468f","resourceVersion":"369","creationTimestamp":"2023-09-12T22:01:16Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2d3a4320-e6cf-4430-9f20-cd5151fa4503","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2d3a4320-e6cf-4430-9f20-cd5151fa4503\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5510 chars]
	I0912 22:01:51.764665  106287 request.go:629] Waited for 197.39325ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:51.764720  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:51.764725  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:51.764733  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:51.764739  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:51.767142  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:51.767162  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:51.767169  106287 round_trippers.go:580]     Audit-Id: 3bed267f-8c7d-40fb-a549-312623dbf781
	I0912 22:01:51.767175  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:51.767181  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:51.767188  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:51.767196  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:51.767204  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:51 GMT
	I0912 22:01:51.767331  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:51.767646  106287 pod_ready.go:92] pod "kube-proxy-p2j8w" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:51.767661  106287 pod_ready.go:81] duration metric: took 399.952847ms waiting for pod "kube-proxy-p2j8w" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:51.767676  106287 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:51.964135  106287 request.go:629] Waited for 196.377729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-947523
	I0912 22:01:51.964201  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-947523
	I0912 22:01:51.964207  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:51.964215  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:51.964221  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:51.966778  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:51.966799  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:51.966807  106287 round_trippers.go:580]     Audit-Id: 279d0f08-c584-4454-ba4a-2ab7091eb660
	I0912 22:01:51.966812  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:51.966817  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:51.966822  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:51.966829  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:51.966837  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:51 GMT
	I0912 22:01:51.966928  106287 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-947523","namespace":"kube-system","uid":"0c533c57-b3e2-461b-ab69-fc5253dc6074","resourceVersion":"289","creationTimestamp":"2023-09-12T22:01:04Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"5670a9664f0c7c79145baba26de8ea87","kubernetes.io/config.mirror":"5670a9664f0c7c79145baba26de8ea87","kubernetes.io/config.seen":"2023-09-12T22:01:03.833981869Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-12T22:01:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4676 chars]
	I0912 22:01:52.164694  106287 request.go:629] Waited for 197.381333ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:52.164747  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes/multinode-947523
	I0912 22:01:52.164752  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:52.164759  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:52.164766  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:52.167167  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:52.167195  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:52.167205  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:52.167215  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:52 GMT
	I0912 22:01:52.167223  106287 round_trippers.go:580]     Audit-Id: 809d3782-8644-4ade-8305-f82b9df12449
	I0912 22:01:52.167230  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:52.167239  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:52.167246  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:52.167366  106287 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-12T22:01:00Z","fieldsType":"FieldsV1","fiel [truncated 6244 chars]
	I0912 22:01:52.167709  106287 pod_ready.go:92] pod "kube-scheduler-multinode-947523" in "kube-system" namespace has status "Ready":"True"
	I0912 22:01:52.167726  106287 pod_ready.go:81] duration metric: took 400.0394ms waiting for pod "kube-scheduler-multinode-947523" in "kube-system" namespace to be "Ready" ...
	I0912 22:01:52.167739  106287 pod_ready.go:38] duration metric: took 1.601129412s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:01:52.167757  106287 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:01:52.167800  106287 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:01:52.178369  106287 system_svc.go:56] duration metric: took 10.601263ms WaitForService to wait for kubelet.
	I0912 22:01:52.178392  106287 kubeadm.go:581] duration metric: took 6.142978278s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:01:52.178418  106287 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:01:52.364848  106287 request.go:629] Waited for 186.350358ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.58.2:8443/api/v1/nodes
	I0912 22:01:52.364929  106287 round_trippers.go:463] GET https://192.168.58.2:8443/api/v1/nodes
	I0912 22:01:52.364941  106287 round_trippers.go:469] Request Headers:
	I0912 22:01:52.364954  106287 round_trippers.go:473]     Accept: application/json, */*
	I0912 22:01:52.364967  106287 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0912 22:01:52.367346  106287 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0912 22:01:52.367365  106287 round_trippers.go:577] Response Headers:
	I0912 22:01:52.367372  106287 round_trippers.go:580]     Audit-Id: 9cfa81bd-9eb5-4b93-bbae-9b49414a2228
	I0912 22:01:52.367377  106287 round_trippers.go:580]     Cache-Control: no-cache, private
	I0912 22:01:52.367382  106287 round_trippers.go:580]     Content-Type: application/json
	I0912 22:01:52.367388  106287 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 46812448-8fb9-4907-8725-a71e11da7c7a
	I0912 22:01:52.367396  106287 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: f1ba6956-391a-4b13-a9cb-a187a6771f32
	I0912 22:01:52.367406  106287 round_trippers.go:580]     Date: Tue, 12 Sep 2023 22:01:52 GMT
	I0912 22:01:52.367557  106287 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"multinode-947523","uid":"a90ff295-9598-41b5-8d07-0f8a1fa3629d","resourceVersion":"425","creationTimestamp":"2023-09-12T22:01:00Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-947523","kubernetes.io/os":"linux","minikube.k8s.io/commit":"45f04e6c33f17ea86560d581e35f03eca0c584e1","minikube.k8s.io/name":"multinode-947523","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_12T22_01_04_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 12465 chars]
	I0912 22:01:52.368206  106287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:01:52.368226  106287 node_conditions.go:123] node cpu capacity is 8
	I0912 22:01:52.368238  106287 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:01:52.368248  106287 node_conditions.go:123] node cpu capacity is 8
	I0912 22:01:52.368253  106287 node_conditions.go:105] duration metric: took 189.829803ms to run NodePressure ...
	I0912 22:01:52.368265  106287 start.go:228] waiting for startup goroutines ...
	I0912 22:01:52.368298  106287 start.go:242] writing updated cluster config ...
	I0912 22:01:52.368732  106287 ssh_runner.go:195] Run: rm -f paused
	I0912 22:01:52.414342  106287 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:01:52.417031  106287 out.go:177] * Done! kubectl is now configured to use "multinode-947523" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* Sep 12 22:01:51 multinode-947523 crio[957]: time="2023-09-12 22:01:51.115323564Z" level=info msg="Got pod network &{Name:coredns-5dd5756b68-m8mcv Namespace:kube-system ID:817c0396c5fe1146594250c812c9534f569adf2e24662556f16a94fd1ec56a70 UID:a925e809-5cce-4008-870d-3de1b67bbe83 NetNS:/var/run/netns/ab5f98eb-40da-4878-9a31-c4146da87512 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 12 22:01:51 multinode-947523 crio[957]: time="2023-09-12 22:01:51.115457867Z" level=info msg="Deleting pod kube-system_coredns-5dd5756b68-m8mcv from CNI network \"kindnet\" (type=ptp)"
	Sep 12 22:01:51 multinode-947523 crio[957]: time="2023-09-12 22:01:51.150220172Z" level=info msg="Stopped pod sandbox: 817c0396c5fe1146594250c812c9534f569adf2e24662556f16a94fd1ec56a70" id=d3a9b06b-6e1f-4a17-91e6-7af8c8d70db5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 12 22:01:52 multinode-947523 crio[957]: time="2023-09-12 22:01:52.025310327Z" level=info msg="Removing container: 126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73" id=ff5eae45-997a-4fb6-b6b7-3cdf069c90c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 12 22:01:52 multinode-947523 crio[957]: time="2023-09-12 22:01:52.040471144Z" level=info msg="Removed container 126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73: kube-system/coredns-5dd5756b68-m8mcv/coredns" id=ff5eae45-997a-4fb6-b6b7-3cdf069c90c6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.387332497Z" level=info msg="Running pod sandbox: default/busybox-5bc68d56bd-4qwb4/POD" id=94ef5519-610c-4494-b67d-2a365628af29 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.387390413Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.401038288Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4qwb4 Namespace:default ID:45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496 UID:7f3a88ac-9409-4215-9b10-bbbafb2b9654 NetNS:/var/run/netns/535784ae-88b6-4159-8f1d-c75abd384e23 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.401071448Z" level=info msg="Adding pod default_busybox-5bc68d56bd-4qwb4 to CNI network \"kindnet\" (type=ptp)"
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.410083126Z" level=info msg="Got pod network &{Name:busybox-5bc68d56bd-4qwb4 Namespace:default ID:45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496 UID:7f3a88ac-9409-4215-9b10-bbbafb2b9654 NetNS:/var/run/netns/535784ae-88b6-4159-8f1d-c75abd384e23 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.410227449Z" level=info msg="Checking pod default_busybox-5bc68d56bd-4qwb4 for CNI network kindnet (type=ptp)"
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.423831600Z" level=info msg="Ran pod sandbox 45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496 with infra container: default/busybox-5bc68d56bd-4qwb4/POD" id=94ef5519-610c-4494-b67d-2a365628af29 name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.424973424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=42732e42-12bf-44cd-850b-1e02e4c8c30e name=/runtime.v1.ImageService/ImageStatus
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.425215360Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=42732e42-12bf-44cd-850b-1e02e4c8c30e name=/runtime.v1.ImageService/ImageStatus
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.425713291Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=f5033178-fe40-4307-a350-ddeaae4e52dd name=/runtime.v1.ImageService/PullImage
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.431140048Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 12 22:01:53 multinode-947523 crio[957]: time="2023-09-12 22:01:53.662060874Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.141097298Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=f5033178-fe40-4307-a350-ddeaae4e52dd name=/runtime.v1.ImageService/PullImage
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.141928563Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=6f0d7d1a-0637-4852-8cc3-a5c756633a7b name=/runtime.v1.ImageService/ImageStatus
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.142490562Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6f0d7d1a-0637-4852-8cc3-a5c756633a7b name=/runtime.v1.ImageService/ImageStatus
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.143178821Z" level=info msg="Creating container: default/busybox-5bc68d56bd-4qwb4/busybox" id=9ad39c40-2790-4504-890d-192f549a5698 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.143250978Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.224618448Z" level=info msg="Created container c2a0cabc8754009db1c3d623b42875eeb8cfdf72ecb691885eedf881283a6b92: default/busybox-5bc68d56bd-4qwb4/busybox" id=9ad39c40-2790-4504-890d-192f549a5698 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.225179337Z" level=info msg="Starting container: c2a0cabc8754009db1c3d623b42875eeb8cfdf72ecb691885eedf881283a6b92" id=b898f4fe-4294-4a19-983a-d8357db53101 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:01:54 multinode-947523 crio[957]: time="2023-09-12 22:01:54.232937690Z" level=info msg="Started container" PID=2678 containerID=c2a0cabc8754009db1c3d623b42875eeb8cfdf72ecb691885eedf881283a6b92 description=default/busybox-5bc68d56bd-4qwb4/busybox id=b898f4fe-4294-4a19-983a-d8357db53101 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c2a0cabc87540       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   45f74496442db       busybox-5bc68d56bd-4qwb4
	989b9d4a90b55       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      38 seconds ago       Running             storage-provisioner       0                   1dcca89e4d75b       storage-provisioner
	bd5d39592d871       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      38 seconds ago       Running             coredns                   0                   644f569f334b4       coredns-5dd5756b68-6q54t
	bc10545bd4e4a       docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052    39 seconds ago       Running             kindnet-cni               0                   620303d6f6a6c       kindnet-947mb
	ff0df5800a74f       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      41 seconds ago       Running             kube-proxy                0                   d3058eac2c5b1       kube-proxy-p2j8w
	1eb6c2929f45f       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      59 seconds ago       Running             kube-controller-manager   0                   836078d9c514c       kube-controller-manager-multinode-947523
	e3d43ff6f2ea8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   3bf03909cc734       etcd-multinode-947523
	736dccdd44581       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   da763b50248ce       kube-apiserver-multinode-947523
	68b43a27c7cbb       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   cb8271853c00d       kube-scheduler-multinode-947523
	
	* 
	* ==> coredns [bd5d39592d871eb588e3074f1006038c34fad3ee495028e7a8160ea84b888542] <==
	* [INFO] 10.244.1.2:48135 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000092842s
	[INFO] 10.244.0.4:57696 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000109101s
	[INFO] 10.244.0.4:41485 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001311192s
	[INFO] 10.244.0.4:51285 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000058809s
	[INFO] 10.244.0.4:41947 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000075407s
	[INFO] 10.244.0.4:42794 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.000943426s
	[INFO] 10.244.0.4:45839 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000038404s
	[INFO] 10.244.0.4:40622 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000051991s
	[INFO] 10.244.0.4:33030 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000034153s
	[INFO] 10.244.1.2:33212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127264s
	[INFO] 10.244.1.2:36249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163698s
	[INFO] 10.244.1.2:47713 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110255s
	[INFO] 10.244.1.2:36830 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091917s
	[INFO] 10.244.0.4:58186 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111522s
	[INFO] 10.244.0.4:59300 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000065786s
	[INFO] 10.244.0.4:47215 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000053126s
	[INFO] 10.244.0.4:58937 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000054309s
	[INFO] 10.244.1.2:50072 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136219s
	[INFO] 10.244.1.2:33593 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000131516s
	[INFO] 10.244.1.2:46769 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000103416s
	[INFO] 10.244.1.2:41665 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078991s
	[INFO] 10.244.0.4:34044 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113085s
	[INFO] 10.244.0.4:56444 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000069458s
	[INFO] 10.244.0.4:59267 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000060298s
	[INFO] 10.244.0.4:53324 - 5 "PTR IN 1.58.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000050681s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-947523
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-947523
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=multinode-947523
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T22_01_04_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 22:01:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-947523
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 22:01:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 22:01:34 +0000   Tue, 12 Sep 2023 22:00:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 22:01:34 +0000   Tue, 12 Sep 2023 22:00:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 22:01:34 +0000   Tue, 12 Sep 2023 22:00:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 22:01:34 +0000   Tue, 12 Sep 2023 22:01:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-947523
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 89a97109cc9143f8a887a07b439d8849
	  System UUID:                5318d50e-8752-4746-9906-7b3de2ec68ff
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4qwb4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 coredns-5dd5756b68-6q54t                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     42s
	  kube-system                 etcd-multinode-947523                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         54s
	  kube-system                 kindnet-947mb                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      42s
	  kube-system                 kube-apiserver-multinode-947523             250m (3%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-multinode-947523    200m (2%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-proxy-p2j8w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-multinode-947523             100m (1%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 40s                kube-proxy       
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x2 over 61s)  kubelet          Node multinode-947523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x2 over 61s)  kubelet          Node multinode-947523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x2 over 61s)  kubelet          Node multinode-947523 status is now: NodeHasSufficientPID
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s                kubelet          Node multinode-947523 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s                kubelet          Node multinode-947523 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s                kubelet          Node multinode-947523 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node multinode-947523 event: Registered Node multinode-947523 in Controller
	  Normal  NodeReady                39s                kubelet          Node multinode-947523 status is now: NodeReady
	
	
	Name:               multinode-947523-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-947523-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 22:01:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-947523-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 22:01:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 22:01:50 +0000   Tue, 12 Sep 2023 22:01:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 22:01:50 +0000   Tue, 12 Sep 2023 22:01:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 22:01:50 +0000   Tue, 12 Sep 2023 22:01:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 22:01:50 +0000   Tue, 12 Sep 2023 22:01:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-947523-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 4cc01695237e4f629e71795264374dd2
	  System UUID:                3f1e9141-dd96-4c18-b388-ba47b17cb1f0
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-2lnnj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kindnet-29mnh               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13s
	  kube-system                 kube-proxy-n6dqw            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10s                kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x5 over 15s)  kubelet          Node multinode-947523-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x5 over 15s)  kubelet          Node multinode-947523-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x5 over 15s)  kubelet          Node multinode-947523-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node multinode-947523-m02 event: Registered Node multinode-947523-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-947523-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [  +0.004922] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006591] FS-Cache: N-cookie d=00000000a5c12aec{9p.inode} n=00000000085c872c
	[  +0.007353] FS-Cache: N-key=[8] '7ca00f0200000000'
	[  +0.418767] FS-Cache: Duplicate cookie detected
	[  +0.004695] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006744] FS-Cache: O-cookie d=00000000a5c12aec{9p.inode} n=000000000cd7d0be
	[  +0.007348] FS-Cache: O-key=[8] '83a00f0200000000'
	[  +0.005008] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.007975] FS-Cache: N-cookie d=00000000a5c12aec{9p.inode} n=000000000493d8bd
	[  +0.008751] FS-Cache: N-key=[8] '83a00f0200000000'
	[ +18.292989] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep12 21:53] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +1.004063] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000030] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +2.015757] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +4.191565] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[  +8.191236] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[Sep12 21:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[ +32.764792] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	
	* 
	* ==> etcd [e3d43ff6f2ea8896d20a1362ae4481202decc908d837e7fd8daa1f5a5cfa015a] <==
	* {"level":"info","ts":"2023-09-12T22:00:58.521081Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T22:00:58.521138Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-12T22:00:58.521042Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-12T22:00:58.521278Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-09-12T22:00:58.521602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2023-09-12T22:00:58.521748Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2023-09-12T22:00:58.550686Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-12T22:00:58.550751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-12T22:00:58.550769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-09-12T22:00:58.550783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-09-12T22:00:58.550791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-12T22:00:58.550808Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-09-12T22:00:58.550818Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-09-12T22:00:58.551493Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T22:00:58.552071Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:00:58.552064Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-947523 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T22:00:58.552126Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:00:58.552404Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T22:00:58.552568Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T22:00:58.55262Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-12T22:00:58.552643Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T22:00:58.552674Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T22:00:58.554591Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-09-12T22:00:58.555316Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T22:01:36.092653Z","caller":"traceutil/trace.go:171","msg":"trace[1490775957] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"174.000314ms","start":"2023-09-12T22:01:35.918629Z","end":"2023-09-12T22:01:36.092629Z","steps":["trace[1490775957] 'process raft request'  (duration: 173.848576ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  22:01:58 up  1:44,  0 users,  load average: 1.63, 1.23, 0.80
	Linux multinode-947523 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [bc10545bd4e4a76f82a14bb00239e5b47fd2a245bec9b83f069a3993108b709d] <==
	* I0912 22:01:18.523475       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0912 22:01:18.523532       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0912 22:01:18.523712       1 main.go:116] setting mtu 1500 for CNI 
	I0912 22:01:18.523726       1 main.go:146] kindnetd IP family: "ipv4"
	I0912 22:01:18.523751       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0912 22:01:18.826759       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0912 22:01:18.826791       1 main.go:227] handling current node
	I0912 22:01:28.935659       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0912 22:01:28.935685       1 main.go:227] handling current node
	I0912 22:01:38.942887       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0912 22:01:38.942914       1 main.go:227] handling current node
	I0912 22:01:48.947700       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0912 22:01:48.947726       1 main.go:227] handling current node
	I0912 22:01:48.947736       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0912 22:01:48.947740       1 main.go:250] Node multinode-947523-m02 has CIDR [10.244.1.0/24] 
	I0912 22:01:48.947903       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [736dccdd44581f1e0181e5f68695d2a1c2e64713c3a67a5e57f1f67405629d32] <==
	* I0912 22:01:00.887028       1 aggregator.go:166] initial CRD sync complete...
	I0912 22:01:00.887046       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 22:01:00.887054       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:01:00.887063       1 cache.go:39] Caches are synced for autoregister controller
	I0912 22:01:00.887243       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:01:00.887283       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 22:01:00.888812       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 22:01:00.889895       1 controller.go:624] quota admission added evaluator for: namespaces
	E0912 22:01:00.929121       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0912 22:01:01.134202       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:01:01.692342       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0912 22:01:01.695872       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0912 22:01:01.695894       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:01:02.063382       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:01:02.096811       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0912 22:01:02.228639       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0912 22:01:02.233893       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0912 22:01:02.234809       1 controller.go:624] quota admission added evaluator for: endpoints
	I0912 22:01:02.238552       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0912 22:01:02.836202       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 22:01:03.779474       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 22:01:03.788631       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0912 22:01:03.799291       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 22:01:16.115566       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0912 22:01:16.145689       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [1eb6c2929f45f02a0d2e062566ece8f02b0b4b6ec853fec6f309fb156e066769] <==
	* I0912 22:01:45.427815       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-947523-m02" podCIDRs=["10.244.1.0/24"]
	I0912 22:01:46.040773       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0912 22:01:46.047642       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-m8mcv"
	I0912 22:01:46.055096       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.499557ms"
	I0912 22:01:46.062205       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.046026ms"
	I0912 22:01:46.062354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.556µs"
	I0912 22:01:46.226927       1 event.go:307] "Event occurred" object="multinode-947523-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-947523-m02 event: Registered Node multinode-947523-m02 in Controller"
	I0912 22:01:46.226997       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-947523-m02"
	I0912 22:01:50.266827       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-947523-m02"
	I0912 22:01:51.163881       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.887µs"
	I0912 22:01:52.035629       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.909µs"
	I0912 22:01:52.044403       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.731µs"
	I0912 22:01:52.045943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.142µs"
	I0912 22:01:53.068215       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0912 22:01:53.074573       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-2lnnj"
	I0912 22:01:53.077853       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4qwb4"
	I0912 22:01:53.085306       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="17.241652ms"
	I0912 22:01:53.090426       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.066969ms"
	I0912 22:01:53.090712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.944µs"
	I0912 22:01:53.091313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="33.73µs"
	I0912 22:01:53.095116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="105.883µs"
	I0912 22:01:54.917357       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.895015ms"
	I0912 22:01:54.917446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="45.921µs"
	I0912 22:01:55.045327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="3.893704ms"
	I0912 22:01:55.045431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="51.723µs"
	
	* 
	* ==> kube-proxy [ff0df5800a74f53db4a1560a056abd502a2dae08c460dc601e57bb9a6f9dca11] <==
	* I0912 22:01:17.335296       1 server_others.go:69] "Using iptables proxy"
	I0912 22:01:17.346484       1 node.go:141] Successfully retrieved node IP: 192.168.58.2
	I0912 22:01:17.367016       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 22:01:17.369215       1 server_others.go:152] "Using iptables Proxier"
	I0912 22:01:17.369249       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0912 22:01:17.369256       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0912 22:01:17.369284       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 22:01:17.369487       1 server.go:846] "Version info" version="v1.28.1"
	I0912 22:01:17.369501       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:01:17.370154       1 config.go:188] "Starting service config controller"
	I0912 22:01:17.370174       1 config.go:97] "Starting endpoint slice config controller"
	I0912 22:01:17.370190       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 22:01:17.370191       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 22:01:17.370278       1 config.go:315] "Starting node config controller"
	I0912 22:01:17.370297       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 22:01:17.470416       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 22:01:17.470487       1 shared_informer.go:318] Caches are synced for service config
	I0912 22:01:17.470558       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [68b43a27c7cbb6a608be9323475bd70310015b425ce0dd6494dde29943168b36] <==
	* W0912 22:01:00.849806       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0912 22:01:00.849819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0912 22:01:00.849880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0912 22:01:00.849890       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0912 22:01:00.850292       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:01:00.850315       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 22:01:00.850373       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0912 22:01:00.850385       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0912 22:01:00.850545       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0912 22:01:00.850555       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0912 22:01:00.850590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 22:01:00.850600       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 22:01:00.850653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0912 22:01:00.850661       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0912 22:01:01.657392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0912 22:01:01.657421       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0912 22:01:01.782447       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0912 22:01:01.782484       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0912 22:01:01.863309       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0912 22:01:01.863351       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0912 22:01:01.926704       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0912 22:01:01.926744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0912 22:01:01.939015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0912 22:01:01.939053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0912 22:01:02.344843       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* Sep 12 22:01:19 multinode-947523 kubelet[1594]: I0912 22:01:19.447923    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn566\" (UniqueName: \"kubernetes.io/projected/a925e809-5cce-4008-870d-3de1b67bbe83-kube-api-access-tn566\") pod \"coredns-5dd5756b68-m8mcv\" (UID: \"a925e809-5cce-4008-870d-3de1b67bbe83\") " pod="kube-system/coredns-5dd5756b68-m8mcv"
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: I0912 22:01:19.448004    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvp5q\" (UniqueName: \"kubernetes.io/projected/7feda27b-75bb-445b-8ed7-331ebce33a72-kube-api-access-bvp5q\") pod \"storage-provisioner\" (UID: \"7feda27b-75bb-445b-8ed7-331ebce33a72\") " pod="kube-system/storage-provisioner"
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: W0912 22:01:19.657469    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio-1dcca89e4d75bdfe2b6e88f75e165422be34749c5c832248eaf43943d2c4a3ec WatchSource:0}: Error finding container 1dcca89e4d75bdfe2b6e88f75e165422be34749c5c832248eaf43943d2c4a3ec: Status 404 returned error can't find the container with id 1dcca89e4d75bdfe2b6e88f75e165422be34749c5c832248eaf43943d2c4a3ec
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: W0912 22:01:19.657695    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio-644f569f334b4402813fd0d2c65c65daa433bd076f34115766f5519cd50a0761 WatchSource:0}: Error finding container 644f569f334b4402813fd0d2c65c65daa433bd076f34115766f5519cd50a0761: Status 404 returned error can't find the container with id 644f569f334b4402813fd0d2c65c65daa433bd076f34115766f5519cd50a0761
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: W0912 22:01:19.657880    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio-817c0396c5fe1146594250c812c9534f569adf2e24662556f16a94fd1ec56a70 WatchSource:0}: Error finding container 817c0396c5fe1146594250c812c9534f569adf2e24662556f16a94fd1ec56a70: Status 404 returned error can't find the container with id 817c0396c5fe1146594250c812c9534f569adf2e24662556f16a94fd1ec56a70
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: I0912 22:01:19.967925    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.967869935 podCreationTimestamp="2023-09-12 22:01:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 22:01:19.967361814 +0000 UTC m=+16.208070379" watchObservedRunningTime="2023-09-12 22:01:19.967869935 +0000 UTC m=+16.208578571"
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: I0912 22:01:19.977267    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6q54t" podStartSLOduration=3.977223403 podCreationTimestamp="2023-09-12 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 22:01:19.977138767 +0000 UTC m=+16.217847334" watchObservedRunningTime="2023-09-12 22:01:19.977223403 +0000 UTC m=+16.217931971"
	Sep 12 22:01:19 multinode-947523 kubelet[1594]: I0912 22:01:19.989023    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-m8mcv" podStartSLOduration=3.988963723 podCreationTimestamp="2023-09-12 22:01:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-12 22:01:19.988569099 +0000 UTC m=+16.229277759" watchObservedRunningTime="2023-09-12 22:01:19.988963723 +0000 UTC m=+16.229672291"
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.322369    1594 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a925e809-5cce-4008-870d-3de1b67bbe83-config-volume\") pod \"a925e809-5cce-4008-870d-3de1b67bbe83\" (UID: \"a925e809-5cce-4008-870d-3de1b67bbe83\") "
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.322428    1594 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn566\" (UniqueName: \"kubernetes.io/projected/a925e809-5cce-4008-870d-3de1b67bbe83-kube-api-access-tn566\") pod \"a925e809-5cce-4008-870d-3de1b67bbe83\" (UID: \"a925e809-5cce-4008-870d-3de1b67bbe83\") "
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.322851    1594 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a925e809-5cce-4008-870d-3de1b67bbe83-config-volume" (OuterVolumeSpecName: "config-volume") pod "a925e809-5cce-4008-870d-3de1b67bbe83" (UID: "a925e809-5cce-4008-870d-3de1b67bbe83"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.324400    1594 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a925e809-5cce-4008-870d-3de1b67bbe83-kube-api-access-tn566" (OuterVolumeSpecName: "kube-api-access-tn566") pod "a925e809-5cce-4008-870d-3de1b67bbe83" (UID: "a925e809-5cce-4008-870d-3de1b67bbe83"). InnerVolumeSpecName "kube-api-access-tn566". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.422877    1594 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-tn566\" (UniqueName: \"kubernetes.io/projected/a925e809-5cce-4008-870d-3de1b67bbe83-kube-api-access-tn566\") on node \"multinode-947523\" DevicePath \"\""
	Sep 12 22:01:51 multinode-947523 kubelet[1594]: I0912 22:01:51.422913    1594 reconciler_common.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a925e809-5cce-4008-870d-3de1b67bbe83-config-volume\") on node \"multinode-947523\" DevicePath \"\""
	Sep 12 22:01:52 multinode-947523 kubelet[1594]: I0912 22:01:52.023970    1594 scope.go:117] "RemoveContainer" containerID="126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73"
	Sep 12 22:01:52 multinode-947523 kubelet[1594]: I0912 22:01:52.040930    1594 scope.go:117] "RemoveContainer" containerID="126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73"
	Sep 12 22:01:52 multinode-947523 kubelet[1594]: E0912 22:01:52.041357    1594 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73\": container with ID starting with 126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73 not found: ID does not exist" containerID="126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73"
	Sep 12 22:01:52 multinode-947523 kubelet[1594]: I0912 22:01:52.041478    1594 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73"} err="failed to get container status \"126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73\": rpc error: code = NotFound desc = could not find container \"126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73\": container with ID starting with 126478599bc53b5bedffeaedf62c6b1ff353d0b72bb8ad298117ef056441cd73 not found: ID does not exist"
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: I0912 22:01:53.084677    1594 topology_manager.go:215] "Topology Admit Handler" podUID="7f3a88ac-9409-4215-9b10-bbbafb2b9654" podNamespace="default" podName="busybox-5bc68d56bd-4qwb4"
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: E0912 22:01:53.084763    1594 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a925e809-5cce-4008-870d-3de1b67bbe83" containerName="coredns"
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: I0912 22:01:53.084802    1594 memory_manager.go:346] "RemoveStaleState removing state" podUID="a925e809-5cce-4008-870d-3de1b67bbe83" containerName="coredns"
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: I0912 22:01:53.234266    1594 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwq86\" (UniqueName: \"kubernetes.io/projected/7f3a88ac-9409-4215-9b10-bbbafb2b9654-kube-api-access-pwq86\") pod \"busybox-5bc68d56bd-4qwb4\" (UID: \"7f3a88ac-9409-4215-9b10-bbbafb2b9654\") " pod="default/busybox-5bc68d56bd-4qwb4"
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: W0912 22:01:53.421510    1594 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio-45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496 WatchSource:0}: Error finding container 45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496: Status 404 returned error can't find the container with id 45f74496442db4a64a60d510fda86fc8216a25c2c2304a5cd00d44ca447c6496
	Sep 12 22:01:53 multinode-947523 kubelet[1594]: I0912 22:01:53.852014    1594 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a925e809-5cce-4008-870d-3de1b67bbe83" path="/var/lib/kubelet/pods/a925e809-5cce-4008-870d-3de1b67bbe83/volumes"
	Sep 12 22:01:55 multinode-947523 kubelet[1594]: I0912 22:01:55.041731    1594 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5bc68d56bd-4qwb4" podStartSLOduration=1.32566019 podCreationTimestamp="2023-09-12 22:01:53 +0000 UTC" firstStartedPulling="2023-09-12 22:01:53.425386136 +0000 UTC m=+49.666094694" lastFinishedPulling="2023-09-12 22:01:54.141405137 +0000 UTC m=+50.382113696" observedRunningTime="2023-09-12 22:01:55.041223997 +0000 UTC m=+51.281932565" watchObservedRunningTime="2023-09-12 22:01:55.041679192 +0000 UTC m=+51.282387758"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-947523 -n multinode-947523
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-947523 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.07s)
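The post-mortem above only captures node and kubelet state; the ping commands that actually failed are not shown in this excerpt. A rough manual approximation of the check implied by the test name, reusing the busybox pod created in the kubelet log above (the host gateway address is a placeholder, not taken from this run):

	# hypothetical reproduction sketch; pod name from the kubelet log above, host IP must be substituted
	kubectl --context multinode-947523 exec busybox-5bc68d56bd-4qwb4 -- sh -c "ping -c 1 <host-gateway-ip>"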

                                                
                                    
x
+
TestRunningBinaryUpgrade (61.24s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.9.0.3923459532.exe start -p running-upgrade-694624 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.9.0.3923459532.exe start -p running-upgrade-694624 --memory=2200 --vm-driver=docker  --container-runtime=crio: (56.533565046s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-694624 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-694624 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (2.204979402s)

                                                
                                                
-- stdout --
	* [running-upgrade-694624] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node running-upgrade-694624 in cluster running-upgrade-694624
	* Pulling base image ...
	* Updating the running docker "running-upgrade-694624" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:14:25.590968  201879 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:14:25.591103  201879 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:25.591119  201879 out.go:309] Setting ErrFile to fd 2...
	I0912 22:14:25.591127  201879 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:14:25.591348  201879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:14:25.591856  201879 out.go:303] Setting JSON to false
	I0912 22:14:25.593168  201879 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7014,"bootTime":1694549852,"procs":396,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:14:25.593237  201879 start.go:138] virtualization: kvm guest
	I0912 22:14:25.595098  201879 out.go:177] * [running-upgrade-694624] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:14:25.596684  201879 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:14:25.597851  201879 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:14:25.596752  201879 notify.go:220] Checking for updates...
	I0912 22:14:25.599168  201879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:14:25.600321  201879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:14:25.601521  201879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:14:25.602549  201879 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:14:25.603896  201879 config.go:182] Loaded profile config "running-upgrade-694624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0912 22:14:25.603916  201879 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 22:14:25.605442  201879 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0912 22:14:25.606439  201879 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:14:25.629256  201879 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:14:25.629340  201879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:14:25.681384  201879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:80 SystemTime:2023-09-12 22:14:25.672104483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:14:25.681484  201879 docker.go:294] overlay module found
	I0912 22:14:25.682894  201879 out.go:177] * Using the docker driver based on existing profile
	I0912 22:14:25.684084  201879 start.go:298] selected driver: docker
	I0912 22:14:25.684099  201879 start.go:902] validating driver "docker" against &{Name:running-upgrade-694624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-694624 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0912 22:14:25.684168  201879 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:14:25.684999  201879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:14:25.738704  201879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:70 OomKillDisable:true NGoroutines:80 SystemTime:2023-09-12 22:14:25.727825509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:14:25.739037  201879 cni.go:84] Creating CNI manager for ""
	I0912 22:14:25.739064  201879 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0912 22:14:25.739073  201879 start_flags.go:321] config:
	{Name:running-upgrade-694624 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:running-upgrade-694624 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0912 22:14:25.740657  201879 out.go:177] * Starting control plane node running-upgrade-694624 in cluster running-upgrade-694624
	I0912 22:14:25.741917  201879 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:14:25.743172  201879 out.go:177] * Pulling base image ...
	I0912 22:14:25.744512  201879 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0912 22:14:25.744534  201879 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:14:25.760839  201879 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:14:25.760861  201879 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	W0912 22:14:25.769578  201879 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0912 22:14:25.769706  201879 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/running-upgrade-694624/config.json ...
	I0912 22:14:25.769737  201879 cache.go:107] acquiring lock: {Name:mkef8b8cd217d4df25d6943c7dee7dac317c42c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769784  201879 cache.go:107] acquiring lock: {Name:mk62a5351d80bcecab7d903a95fe7105573f891f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769797  201879 cache.go:107] acquiring lock: {Name:mkcc2854003790005c1bdb0bc7330a9ad1846b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769747  201879 cache.go:107] acquiring lock: {Name:mk6e8f81d9e9350e15f36d5fd20923fb943bedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769844  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0912 22:14:25.769761  201879 cache.go:107] acquiring lock: {Name:mk1fada46c931131d9b8884d4dc652e9ec987203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769855  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0912 22:14:25.769833  201879 cache.go:107] acquiring lock: {Name:mkcd4361db70b818bcae2db01685403e2eb704b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769863  201879 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 134.73µs
	I0912 22:14:25.769857  201879 cache.go:107] acquiring lock: {Name:mk7ad08ebae4de631c52ce5de7c70397dda4b1e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769854  201879 cache.go:107] acquiring lock: {Name:mke00d55e0931fb6d20dad80ba7308835d5e1e16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.769865  201879 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 70.39µs
	I0912 22:14:25.769920  201879 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0912 22:14:25.769875  201879 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0912 22:14:25.769905  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 22:14:25.769940  201879 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 201.939µs
	I0912 22:14:25.769949  201879 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 22:14:25.769907  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0912 22:14:25.769958  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0912 22:14:25.769959  201879 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 178.321µs
	I0912 22:14:25.769969  201879 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0912 22:14:25.769977  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0912 22:14:25.769970  201879 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 155.238µs
	I0912 22:14:25.769993  201879 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 153.413µs
	I0912 22:14:25.770002  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0912 22:14:25.770007  201879 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0912 22:14:25.769997  201879 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0912 22:14:25.769989  201879 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:14:25.770009  201879 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 172.83µs
	I0912 22:14:25.770021  201879 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0912 22:14:25.769981  201879 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0912 22:14:25.770031  201879 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 278.681µs
	I0912 22:14:25.770044  201879 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0912 22:14:25.770050  201879 cache.go:87] Successfully saved all images to host disk.
	I0912 22:14:25.770035  201879 start.go:365] acquiring machines lock for running-upgrade-694624: {Name:mk20a9902a1affdd8ec951d270fee5ecb3a2bb9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:14:25.770121  201879 start.go:369] acquired machines lock for "running-upgrade-694624" in 60.993µs
	I0912 22:14:25.770139  201879 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:14:25.770143  201879 fix.go:54] fixHost starting: m01
	I0912 22:14:25.770409  201879 cli_runner.go:164] Run: docker container inspect running-upgrade-694624 --format={{.State.Status}}
	I0912 22:14:25.786740  201879 fix.go:102] recreateIfNeeded on running-upgrade-694624: state=Running err=<nil>
	W0912 22:14:25.786785  201879 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 22:14:25.788428  201879 out.go:177] * Updating the running docker "running-upgrade-694624" container ...
	I0912 22:14:25.789874  201879 machine.go:88] provisioning docker machine ...
	I0912 22:14:25.789914  201879 ubuntu.go:169] provisioning hostname "running-upgrade-694624"
	I0912 22:14:25.789969  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:25.809920  201879 main.go:141] libmachine: Using SSH client type: native
	I0912 22:14:25.810246  201879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0912 22:14:25.810261  201879 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-694624 && echo "running-upgrade-694624" | sudo tee /etc/hostname
	I0912 22:14:25.930095  201879 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-694624
	
	I0912 22:14:25.930185  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:25.949991  201879 main.go:141] libmachine: Using SSH client type: native
	I0912 22:14:25.950354  201879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0912 22:14:25.950373  201879 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-694624' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-694624/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-694624' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:14:26.061460  201879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:14:26.061502  201879 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:14:26.061526  201879 ubuntu.go:177] setting up certificates
	I0912 22:14:26.061538  201879 provision.go:83] configureAuth start
	I0912 22:14:26.061606  201879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-694624
	I0912 22:14:26.082254  201879 provision.go:138] copyHostCerts
	I0912 22:14:26.082316  201879 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:14:26.082331  201879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:14:26.082399  201879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:14:26.082499  201879 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:14:26.082512  201879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:14:26.082545  201879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:14:26.082618  201879 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:14:26.082628  201879 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:14:26.082657  201879 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:14:26.082719  201879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-694624 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube running-upgrade-694624]
	I0912 22:14:26.281090  201879 provision.go:172] copyRemoteCerts
	I0912 22:14:26.281152  201879 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:14:26.281201  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:26.302839  201879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/running-upgrade-694624/id_rsa Username:docker}
	I0912 22:14:26.389024  201879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:14:26.408992  201879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:14:26.428725  201879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0912 22:14:26.448877  201879 provision.go:86] duration metric: configureAuth took 387.320987ms
	I0912 22:14:26.448913  201879 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:14:26.449124  201879 config.go:182] Loaded profile config "running-upgrade-694624": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0912 22:14:26.449244  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:26.466467  201879 main.go:141] libmachine: Using SSH client type: native
	I0912 22:14:26.466801  201879 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32974 <nil> <nil>}
	I0912 22:14:26.466818  201879 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:14:26.924342  201879 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:14:26.924368  201879 machine.go:91] provisioned docker machine in 1.134477685s
	I0912 22:14:26.924379  201879 start.go:300] post-start starting for "running-upgrade-694624" (driver="docker")
	I0912 22:14:26.924392  201879 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:14:26.924458  201879 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:14:26.924518  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:26.941520  201879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/running-upgrade-694624/id_rsa Username:docker}
	I0912 22:14:27.028756  201879 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:14:27.031647  201879 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:14:27.031677  201879 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:14:27.031692  201879 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:14:27.031700  201879 info.go:137] Remote host: Ubuntu 19.10
	I0912 22:14:27.031715  201879 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:14:27.031775  201879 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:14:27.031860  201879 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:14:27.031974  201879 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:14:27.038542  201879 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:14:27.055573  201879 start.go:303] post-start completed in 131.179776ms
	I0912 22:14:27.055643  201879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:14:27.055696  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:27.073288  201879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/running-upgrade-694624/id_rsa Username:docker}
	I0912 22:14:27.153400  201879 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:14:27.157379  201879 fix.go:56] fixHost completed within 1.387227413s
	I0912 22:14:27.157398  201879 start.go:83] releasing machines lock for "running-upgrade-694624", held for 1.387265105s
	I0912 22:14:27.157456  201879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" running-upgrade-694624
	I0912 22:14:27.173905  201879 ssh_runner.go:195] Run: cat /version.json
	I0912 22:14:27.173947  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:27.173984  201879 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:14:27.174047  201879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-694624
	I0912 22:14:27.191303  201879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/running-upgrade-694624/id_rsa Username:docker}
	I0912 22:14:27.192205  201879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32974 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/running-upgrade-694624/id_rsa Username:docker}
	W0912 22:14:27.333188  201879 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0912 22:14:27.333269  201879 ssh_runner.go:195] Run: systemctl --version
	I0912 22:14:27.337286  201879 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:14:27.388555  201879 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:14:27.393256  201879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:14:27.407590  201879 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:14:27.407669  201879 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:14:27.429346  201879 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 22:14:27.429371  201879 start.go:469] detecting cgroup driver to use...
	I0912 22:14:27.429405  201879 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:14:27.429455  201879 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:14:27.450749  201879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:14:27.459585  201879 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:14:27.459634  201879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:14:27.468209  201879 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:14:27.476799  201879 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0912 22:14:27.485015  201879 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0912 22:14:27.485067  201879 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:14:27.557629  201879 docker.go:212] disabling docker service ...
	I0912 22:14:27.557699  201879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:14:27.566855  201879 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:14:27.575640  201879 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:14:27.644078  201879 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:14:27.717720  201879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:14:27.727137  201879 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:14:27.739365  201879 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 22:14:27.739425  201879 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:14:27.748798  201879 out.go:177] 
	W0912 22:14:27.750101  201879 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0912 22:14:27.750121  201879 out.go:239] * 
	* 
	W0912 22:14:27.751224  201879 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:14:27.752535  201879 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-694624 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
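The fatal step is visible in the stderr above: the new binary rewrites pause_image in /etc/crio/crio.conf.d/02-crio.conf, but that drop-in does not exist on the v1.9.0-era kicbase image (Ubuntu 19.10 per the log), so the sed exits with status 2 and the start aborts with RUNTIME_ENABLE. A minimal manual probe of the same condition, assuming the running-upgrade-694624 container is still up (the grep target is an assumption; older CRI-O builds usually keep pause_image in /etc/crio/crio.conf rather than a conf.d drop-in):

	# hypothetical checks; these commands are not part of the test itself
	docker exec running-upgrade-694624 ls /etc/crio/crio.conf.d/
	docker exec running-upgrade-694624 grep -n pause_image /etc/crio/crio.conf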
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-09-12 22:14:27.769876949 +0000 UTC m=+1860.938985592
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-694624
helpers_test.go:235: (dbg) docker inspect running-upgrade-694624:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b",
	        "Created": "2023-09-12T22:13:29.327070201Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 193452,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T22:13:29.763515687Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b/hostname",
	        "HostsPath": "/var/lib/docker/containers/80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b/hosts",
	        "LogPath": "/var/lib/docker/containers/80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b/80fd63f1ed6e1a2023be45060539af242711735d6bdf36c50c4b36869712464b-json.log",
	        "Name": "/running-upgrade-694624",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-694624:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4ea375c4449e1a5f0926ab3609eb5bd1436224ed3e6f773aa599b9b279100f83-init/diff:/var/lib/docker/overlay2/fe9fe782711c79911ce1a3eeb75aed54b5dab65af5ca410f5655d6136a84573a/diff:/var/lib/docker/overlay2/bec7d23722f550b81b646d52fc281dcaee7d282fafc3810220362800d1453278/diff:/var/lib/docker/overlay2/abe1fbb5bdb6b65eb45d0756b5d02f5a8f70d4c9225646cc7244aba6414bf9e6/diff:/var/lib/docker/overlay2/57d39e23cb827c899caefba243e04b4de9e8bc9999bda191cbc38e5a53a680ac/diff:/var/lib/docker/overlay2/62db5c51f1eb7dd520a4edfa1249ca58ad7228f5bc96592e14f5677d81c6ee23/diff:/var/lib/docker/overlay2/912e186205fcde31214c10b53f7dee1c7e34e1335a29495d77045ba831961620/diff:/var/lib/docker/overlay2/b5fc26ee9c55dc236e4acc58f7cfa40644aa3011c1478338cbee6818f2851d53/diff:/var/lib/docker/overlay2/6199872b680fa7ed1e58dd1469bfa1fb7f7f4f0ce95ad1630c34c88d9c5ecd32/diff:/var/lib/docker/overlay2/b686b5bba523a19fcf10537804418c8f6d28de6fe90ca07b58023c0e857e8217/diff:/var/lib/docker/overlay2/f88fbf
a1c44dff3e47c45fa186890f476ecf135aa829c7e411b8365979e7964b/diff:/var/lib/docker/overlay2/901696f890da9d07b3eaf0598c0f2cf79ddd6e3040fe9729b8dd4efe088cb6ab/diff:/var/lib/docker/overlay2/6e4acf9cba4f6a611bd1cf777d953c66c6612aada6152ec9e17c55d98cfe8cb2/diff:/var/lib/docker/overlay2/27bd756d9a29b15f9fd4bd7fd52729b8a8e451037cea9418d8c9b0894ff521c3/diff:/var/lib/docker/overlay2/7d8ff628085027fb2295dbe8d5817d5e84f4e43c6ea16556b079f241f7a01e44/diff:/var/lib/docker/overlay2/148a0e4a8ea3142d919b2aba0bad51bba2566d94b09d832988150d484ca67e51/diff:/var/lib/docker/overlay2/6f4e7fec6a7045ed48a83d037c589f656675f16c998c78b60e66276cc3b26a1f/diff:/var/lib/docker/overlay2/82383e08633bdd2380a3602232b2c30df685c0db2cef20923701b7aa410ba99e/diff:/var/lib/docker/overlay2/155c8ff9ec9208a212b264fc85f75c1abd3b90a18c72e8e9f8dfe55a4477be2a/diff:/var/lib/docker/overlay2/f3f312f729055e67ef5c07b86f48d6816daf1b0676c291a937bc461759077de2/diff:/var/lib/docker/overlay2/f57bb783b006dbda938bb7c4e74f8dbec016e660438f27419940f9c6368a2bee/diff:/var/lib/d
ocker/overlay2/30f20839834bfc0bea18addae027bd79b6d69c19fb4f040b58ec81cbe66a0e85/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4ea375c4449e1a5f0926ab3609eb5bd1436224ed3e6f773aa599b9b279100f83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4ea375c4449e1a5f0926ab3609eb5bd1436224ed3e6f773aa599b9b279100f83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4ea375c4449e1a5f0926ab3609eb5bd1436224ed3e6f773aa599b9b279100f83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-694624",
	                "Source": "/var/lib/docker/volumes/running-upgrade-694624/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-694624",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-694624",
	                "name.minikube.sigs.k8s.io": "running-upgrade-694624",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3ccd1fc1cb419e5b0605ad2d8783c43d5a09d9040b5e9b6c446a425552f6f4a4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32974"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32973"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32972"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/3ccd1fc1cb41",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "5104841f0844c21cdba4e86e5938c13bdc0789271a0780e3026536e0fbc61943",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "fdb72cec916471b08e4b7a50ee360732fc0af9c371eef70c4c7deff99cdfae73",
	                    "EndpointID": "5104841f0844c21cdba4e86e5938c13bdc0789271a0780e3026536e0fbc61943",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-694624 -n running-upgrade-694624
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-694624 -n running-upgrade-694624: exit status 4 (271.116789ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:14:28.029352  202433 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-694624" does not appear in /home/jenkins/minikube-integration/17194-15878/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-694624" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-694624" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-694624
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-694624: (1.855644428s)
--- FAIL: TestRunningBinaryUpgrade (61.24s)
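Note on the skipped log retrieval above: the status probe appears to exit with status 4 because the "running-upgrade-694624" endpoint is missing from the kubeconfig at /home/jenkins/minikube-integration/17194-15878/kubeconfig, so the host reports Running while kubectl still points at the stale minikube-vm context. A minimal local sketch of the same probe plus the fix the warning itself suggests (illustrative commands only, not part of this run):

	out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-694624
	out/minikube-linux-amd64 -p running-upgrade-694624 update-context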

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (92.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.9.0.2430443770.exe start -p stopped-upgrade-950672 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.9.0.2430443770.exe start -p stopped-upgrade-950672 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m23.669515411s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.9.0.2430443770.exe -p stopped-upgrade-950672 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.9.0.2430443770.exe -p stopped-upgrade-950672 stop: (3.228414568s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-950672 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-950672 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90 (5.518638065s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-950672] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the docker driver based on existing profile
	* Starting control plane node stopped-upgrade-950672 in cluster stopped-upgrade-950672
	* Pulling base image ...
	* Restarting existing docker container for "stopped-upgrade-950672" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:13:20.793662  191515 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:13:20.793796  191515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:13:20.793810  191515 out.go:309] Setting ErrFile to fd 2...
	I0912 22:13:20.793818  191515 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:13:20.794031  191515 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:13:20.794994  191515 out.go:303] Setting JSON to false
	I0912 22:13:20.796819  191515 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6949,"bootTime":1694549852,"procs":416,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:13:20.796891  191515 start.go:138] virtualization: kvm guest
	I0912 22:13:20.798955  191515 out.go:177] * [stopped-upgrade-950672] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:13:20.800864  191515 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:13:20.800928  191515 notify.go:220] Checking for updates...
	I0912 22:13:20.802295  191515 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:13:20.803692  191515 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:13:20.805073  191515 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:13:20.806329  191515 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:13:20.807772  191515 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:13:20.809576  191515 config.go:182] Loaded profile config "stopped-upgrade-950672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0912 22:13:20.809602  191515 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 22:13:20.811681  191515 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0912 22:13:20.813088  191515 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:13:20.835545  191515 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:13:20.835652  191515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:13:20.896898  191515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:65 SystemTime:2023-09-12 22:13:20.887761645 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:13:20.897032  191515 docker.go:294] overlay module found
	I0912 22:13:20.898782  191515 out.go:177] * Using the docker driver based on existing profile
	I0912 22:13:20.900246  191515 start.go:298] selected driver: docker
	I0912 22:13:20.900261  191515 start.go:902] validating driver "docker" against &{Name:stopped-upgrade-950672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-950672 Namespace: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0912 22:13:20.900344  191515 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:13:20.901192  191515 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:13:20.961186  191515 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:true NGoroutines:65 SystemTime:2023-09-12 22:13:20.951446133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:13:20.961469  191515 cni.go:84] Creating CNI manager for ""
	I0912 22:13:20.961491  191515 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0912 22:13:20.961501  191515 start_flags.go:321] config:
	{Name:stopped-upgrade-950672 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:stopped-upgrade-950672 Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cr
io CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.17.0.2 Port:8443 KubernetesVersion:v1.18.0 ContainerRuntime: ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Auto
PauseInterval:0s}
	I0912 22:13:20.963385  191515 out.go:177] * Starting control plane node stopped-upgrade-950672 in cluster stopped-upgrade-950672
	I0912 22:13:20.964812  191515 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:13:20.966214  191515 out.go:177] * Pulling base image ...
	I0912 22:13:20.967502  191515 preload.go:132] Checking if preload exists for k8s version v1.18.0 and runtime crio
	I0912 22:13:20.967607  191515 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:13:20.984100  191515 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:13:20.984133  191515 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	W0912 22:13:21.016034  191515 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.0/preloaded-images-k8s-v18-v1.18.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0912 22:13:21.016202  191515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/stopped-upgrade-950672/config.json ...
	I0912 22:13:21.016299  191515 cache.go:107] acquiring lock: {Name:mk6e8f81d9e9350e15f36d5fd20923fb943bedd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016322  191515 cache.go:107] acquiring lock: {Name:mk1fada46c931131d9b8884d4dc652e9ec987203 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016379  191515 cache.go:107] acquiring lock: {Name:mke00d55e0931fb6d20dad80ba7308835d5e1e16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016408  191515 cache.go:107] acquiring lock: {Name:mkcc2854003790005c1bdb0bc7330a9ad1846b6b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016437  191515 cache.go:107] acquiring lock: {Name:mkef8b8cd217d4df25d6943c7dee7dac317c42c2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016490  191515 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.0
	I0912 22:13:21.016499  191515 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.0
	I0912 22:13:21.016507  191515 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:13:21.016538  191515 start.go:365] acquiring machines lock for stopped-upgrade-950672: {Name:mk88b1ae294274bd69d1e6e45d3ced085c0502e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016543  191515 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0912 22:13:21.016524  191515 cache.go:107] acquiring lock: {Name:mkcd4361db70b818bcae2db01685403e2eb704b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016561  191515 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.0
	I0912 22:13:21.016609  191515 cache.go:115] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0912 22:13:21.016375  191515 cache.go:107] acquiring lock: {Name:mk62a5351d80bcecab7d903a95fe7105573f891f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.016826  191515 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 529.423µs
	I0912 22:13:21.016851  191515 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0912 22:13:21.016628  191515 start.go:369] acquired machines lock for "stopped-upgrade-950672" in 42.609µs
	I0912 22:13:21.016938  191515 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:13:21.016946  191515 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0912 22:13:21.017000  191515 fix.go:54] fixHost starting: m01
	I0912 22:13:21.016714  191515 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0912 22:13:21.017369  191515 cli_runner.go:164] Run: docker container inspect stopped-upgrade-950672 --format={{.State.Status}}
	I0912 22:13:21.016735  191515 cache.go:107] acquiring lock: {Name:mk7ad08ebae4de631c52ce5de7c70397dda4b1e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:13:21.017561  191515 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0912 22:13:21.017578  191515 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.0
	I0912 22:13:21.017604  191515 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.0
	I0912 22:13:21.017577  191515 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0912 22:13:21.017686  191515 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.0
	I0912 22:13:21.018347  191515 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.0
	I0912 22:13:21.018828  191515 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0912 22:13:21.019054  191515 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0912 22:13:21.038208  191515 fix.go:102] recreateIfNeeded on stopped-upgrade-950672: state=Stopped err=<nil>
	W0912 22:13:21.038249  191515 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 22:13:21.040289  191515 out.go:177] * Restarting existing docker container for "stopped-upgrade-950672" ...
	I0912 22:13:21.041744  191515 cli_runner.go:164] Run: docker start stopped-upgrade-950672
	I0912 22:13:21.187386  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0
	I0912 22:13:21.199373  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0
	I0912 22:13:21.204233  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0
	I0912 22:13:21.226146  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0912 22:13:21.229414  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0912 22:13:21.241677  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0912 22:13:21.242605  191515 cache.go:162] opening:  /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0
	I0912 22:13:21.304379  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 exists
	I0912 22:13:21.304448  191515 cache.go:96] cache image "registry.k8s.io/pause:3.2" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2" took 287.740384ms
	I0912 22:13:21.304470  191515 cache.go:80] save to tar file registry.k8s.io/pause:3.2 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2 succeeded
	I0912 22:13:21.317897  191515 cli_runner.go:164] Run: docker container inspect stopped-upgrade-950672 --format={{.State.Status}}
	I0912 22:13:21.338345  191515 kic.go:426] container "stopped-upgrade-950672" state is running.
	I0912 22:13:21.338686  191515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-950672
	I0912 22:13:21.360996  191515 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/stopped-upgrade-950672/config.json ...
	I0912 22:13:21.361246  191515 machine.go:88] provisioning docker machine ...
	I0912 22:13:21.361273  191515 ubuntu.go:169] provisioning hostname "stopped-upgrade-950672"
	I0912 22:13:21.361320  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:21.384931  191515 main.go:141] libmachine: Using SSH client type: native
	I0912 22:13:21.385360  191515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0912 22:13:21.385377  191515 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-950672 && echo "stopped-upgrade-950672" | sudo tee /etc/hostname
	I0912 22:13:21.385987  191515 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46182->127.0.0.1:32971: read: connection reset by peer
	I0912 22:13:21.754723  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 exists
	I0912 22:13:21.754751  191515 cache.go:96] cache image "registry.k8s.io/coredns:1.6.7" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7" took 738.345174ms
	I0912 22:13:21.754767  191515 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.7 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7 succeeded
	I0912 22:13:21.951745  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 exists
	I0912 22:13:21.951776  191515 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0" took 935.39733ms
	I0912 22:13:21.951794  191515 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.0 succeeded
	I0912 22:13:22.246339  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 exists
	I0912 22:13:22.246371  191515 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0" took 1.23000431s
	I0912 22:13:22.246389  191515 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.0 succeeded
	I0912 22:13:22.256524  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 exists
	I0912 22:13:22.256553  191515 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0" took 1.240133302s
	I0912 22:13:22.256570  191515 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.0 succeeded
	I0912 22:13:22.497730  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 exists
	I0912 22:13:22.497756  191515 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.18.0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0" took 1.481444983s
	I0912 22:13:22.497768  191515 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.18.0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.0 succeeded
	I0912 22:13:22.631200  191515 cache.go:157] /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0912 22:13:22.631222  191515 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.614737806s
	I0912 22:13:22.631234  191515 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0912 22:13:22.631251  191515 cache.go:87] Successfully saved all images to host disk.
	I0912 22:13:24.505112  191515 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-950672
	
	I0912 22:13:24.505187  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:24.522347  191515 main.go:141] libmachine: Using SSH client type: native
	I0912 22:13:24.522663  191515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0912 22:13:24.522689  191515 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-950672' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-950672/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-950672' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:13:24.628562  191515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:13:24.628616  191515 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:13:24.628646  191515 ubuntu.go:177] setting up certificates
	I0912 22:13:24.628654  191515 provision.go:83] configureAuth start
	I0912 22:13:24.628703  191515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-950672
	I0912 22:13:24.645305  191515 provision.go:138] copyHostCerts
	I0912 22:13:24.645379  191515 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:13:24.645394  191515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:13:24.645471  191515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:13:24.645582  191515 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:13:24.645596  191515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:13:24.645637  191515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:13:24.645740  191515 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:13:24.645752  191515 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:13:24.645791  191515 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:13:24.645868  191515 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-950672 san=[172.17.0.2 127.0.0.1 localhost 127.0.0.1 minikube stopped-upgrade-950672]
	I0912 22:13:24.807523  191515 provision.go:172] copyRemoteCerts
	I0912 22:13:24.807592  191515 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:13:24.807644  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:24.824060  191515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/stopped-upgrade-950672/id_rsa Username:docker}
	I0912 22:13:24.903600  191515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:13:24.919835  191515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0912 22:13:24.936455  191515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:13:24.953798  191515 provision.go:86] duration metric: configureAuth took 325.127643ms
	I0912 22:13:24.953823  191515 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:13:24.954004  191515 config.go:182] Loaded profile config "stopped-upgrade-950672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0912 22:13:24.954088  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:24.970484  191515 main.go:141] libmachine: Using SSH client type: native
	I0912 22:13:24.970803  191515 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32971 <nil> <nil>}
	I0912 22:13:24.970818  191515 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:13:25.500167  191515 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:13:25.500196  191515 machine.go:91] provisioned docker machine in 4.138933464s
	I0912 22:13:25.500208  191515 start.go:300] post-start starting for "stopped-upgrade-950672" (driver="docker")
	I0912 22:13:25.500219  191515 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:13:25.500265  191515 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:13:25.500293  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:25.516815  191515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/stopped-upgrade-950672/id_rsa Username:docker}
	I0912 22:13:25.595556  191515 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:13:25.598110  191515 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:13:25.598136  191515 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:13:25.598151  191515 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:13:25.598167  191515 info.go:137] Remote host: Ubuntu 19.10
	I0912 22:13:25.598175  191515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:13:25.598223  191515 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:13:25.598287  191515 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:13:25.598364  191515 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:13:25.604655  191515 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:13:25.620724  191515 start.go:303] post-start completed in 120.502302ms
	I0912 22:13:25.620785  191515 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:13:25.620818  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:25.637323  191515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/stopped-upgrade-950672/id_rsa Username:docker}
	I0912 22:13:25.716987  191515 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:13:25.720791  191515 fix.go:56] fixHost completed within 4.703836557s
	I0912 22:13:25.720822  191515 start.go:83] releasing machines lock for "stopped-upgrade-950672", held for 4.703898678s
	I0912 22:13:25.720890  191515 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" stopped-upgrade-950672
	I0912 22:13:25.737345  191515 ssh_runner.go:195] Run: cat /version.json
	I0912 22:13:25.737390  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:25.737415  191515 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:13:25.737475  191515 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" stopped-upgrade-950672
	I0912 22:13:25.754609  191515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/stopped-upgrade-950672/id_rsa Username:docker}
	I0912 22:13:25.754849  191515 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32971 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/stopped-upgrade-950672/id_rsa Username:docker}
	W0912 22:13:25.880031  191515 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0912 22:13:25.880139  191515 ssh_runner.go:195] Run: systemctl --version
	I0912 22:13:25.883921  191515 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:13:25.935894  191515 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:13:25.940025  191515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:13:25.954474  191515 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:13:25.954543  191515 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:13:25.975105  191515 cni.go:262] disabled [/etc/cni/net.d/100-crio-bridge.conf, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0912 22:13:25.975127  191515 start.go:469] detecting cgroup driver to use...
	I0912 22:13:25.975161  191515 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:13:25.975214  191515 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:13:25.994425  191515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:13:26.002775  191515 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:13:26.002827  191515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:13:26.011336  191515 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:13:26.019478  191515 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0912 22:13:26.027941  191515 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0912 22:13:26.027991  191515 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:13:26.093289  191515 docker.go:212] disabling docker service ...
	I0912 22:13:26.093338  191515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:13:26.102061  191515 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:13:26.110748  191515 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:13:26.169680  191515 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:13:26.238272  191515 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:13:26.246966  191515 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:13:26.258895  191515 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0912 22:13:26.258962  191515 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:13:26.268007  191515 out.go:177] 
	W0912 22:13:26.269427  191515 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 2
	stdout:
	
	stderr:
	sed: can't read /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0912 22:13:26.269446  191515 out.go:239] * 
	* 
	W0912 22:13:26.270455  191515 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0912 22:13:26.272057  191515 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.9.0 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-950672 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (92.42s)
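Note on the RUNTIME_ENABLE failure above: the new binary rewrites pause_image by running sed against the drop-in /etc/crio/crio.conf.d/02-crio.conf, but the v1.9.0-era kicbase container (Ubuntu 19.10 per the provision log) evidently has no such drop-in, so sed fails with "No such file or directory" (status 2) and the second start aborts. A rough way to confirm the CRI-O config layout on the old container (illustrative only, assuming the stopped-upgrade-950672 profile still exists and is running):

	out/minikube-linux-amd64 -p stopped-upgrade-950672 ssh -- ls /etc/crio/crio.conf.d/ /etc/crio/crio.conf
	out/minikube-linux-amd64 -p stopped-upgrade-950672 ssh -- grep pause_image /etc/crio/crio.conf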

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (59.91s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-959901 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-959901 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.978039496s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-959901] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node pause-959901 in cluster pause-959901
	* Pulling base image ...
	* Updating the running docker "pause-959901" container ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	* Configuring CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-959901" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:15:11.444112  211844 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:15:11.444406  211844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:11.444417  211844 out.go:309] Setting ErrFile to fd 2...
	I0912 22:15:11.444425  211844 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:11.444663  211844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:15:11.445209  211844 out.go:303] Setting JSON to false
	I0912 22:15:11.446684  211844 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7060,"bootTime":1694549852,"procs":508,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:15:11.446750  211844 start.go:138] virtualization: kvm guest
	I0912 22:15:11.448944  211844 out.go:177] * [pause-959901] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:15:11.450744  211844 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:15:11.450760  211844 notify.go:220] Checking for updates...
	I0912 22:15:11.452106  211844 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:15:11.453715  211844 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:15:11.455050  211844 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:15:11.456334  211844 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:15:11.462228  211844 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:15:11.464826  211844 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:11.465637  211844 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:15:11.488463  211844 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:15:11.488539  211844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:11.554216  211844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:76 SystemTime:2023-09-12 22:15:11.540030247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:11.554342  211844 docker.go:294] overlay module found
	I0912 22:15:11.556165  211844 out.go:177] * Using the docker driver based on existing profile
	I0912 22:15:11.557636  211844 start.go:298] selected driver: docker
	I0912 22:15:11.557653  211844 start.go:902] validating driver "docker" against &{Name:pause-959901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-959901 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-c
reds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:15:11.557804  211844 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:15:11.557898  211844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:11.618873  211844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:76 SystemTime:2023-09-12 22:15:11.609718981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:11.619457  211844 cni.go:84] Creating CNI manager for ""
	I0912 22:15:11.619473  211844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 22:15:11.619484  211844 start_flags.go:321] config:
	{Name:pause-959901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-959901 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesna
pshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:15:11.622038  211844 out.go:177] * Starting control plane node pause-959901 in cluster pause-959901
	I0912 22:15:11.623353  211844 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:15:11.624670  211844 out.go:177] * Pulling base image ...
	I0912 22:15:11.625871  211844 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:11.625912  211844 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:15:11.625912  211844 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:15:11.625927  211844 cache.go:57] Caching tarball of preloaded images
	I0912 22:15:11.626017  211844 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:15:11.626028  211844 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 22:15:11.626178  211844 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/config.json ...
	I0912 22:15:11.644529  211844 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:15:11.644554  211844 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 22:15:11.644578  211844 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:15:11.644636  211844 start.go:365] acquiring machines lock for pause-959901: {Name:mk94b5140e161f7d5ec5a0cb77ce923805e50ad0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:15:11.644732  211844 start.go:369] acquired machines lock for "pause-959901" in 51.101µs
	I0912 22:15:11.644754  211844 start.go:96] Skipping create...Using existing machine configuration
	I0912 22:15:11.644764  211844 fix.go:54] fixHost starting: 
	I0912 22:15:11.644994  211844 cli_runner.go:164] Run: docker container inspect pause-959901 --format={{.State.Status}}
	I0912 22:15:11.665516  211844 fix.go:102] recreateIfNeeded on pause-959901: state=Running err=<nil>
	W0912 22:15:11.665544  211844 fix.go:128] unexpected machine state, will restart: <nil>
	I0912 22:15:11.667468  211844 out.go:177] * Updating the running docker "pause-959901" container ...
	I0912 22:15:11.668772  211844 machine.go:88] provisioning docker machine ...
	I0912 22:15:11.668799  211844 ubuntu.go:169] provisioning hostname "pause-959901"
	I0912 22:15:11.668870  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:11.690343  211844 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:11.690853  211844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0912 22:15:11.690877  211844 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-959901 && echo "pause-959901" | sudo tee /etc/hostname
	I0912 22:15:11.844912  211844 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-959901
	
	I0912 22:15:11.845012  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:11.864161  211844 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:11.864534  211844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0912 22:15:11.864555  211844 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-959901' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-959901/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-959901' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:15:12.008762  211844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
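The SSH script above is an idempotent patch of /etc/hosts: keep any existing line for the hostname, rewrite a 127.0.1.1 entry if one exists, otherwise append one. A rough local Go equivalent of that check-then-patch logic (illustrative only; the real step runs the shell script over SSH, and this sketch hard-codes the path and hostname):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry roughly mirrors the shell logic above: if /etc/hosts already
// has a line ending in the hostname, do nothing; if it has a 127.0.1.1 line,
// rewrite it; otherwise append a new 127.0.1.1 entry.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+hostname) {
			return nil // entry already present
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, fmt.Sprintf("127.0.1.1 %s", hostname))
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "pause-959901"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
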
	I0912 22:15:12.008792  211844 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:15:12.008814  211844 ubuntu.go:177] setting up certificates
	I0912 22:15:12.008825  211844 provision.go:83] configureAuth start
	I0912 22:15:12.008891  211844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-959901
	I0912 22:15:12.028908  211844 provision.go:138] copyHostCerts
	I0912 22:15:12.028971  211844 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:15:12.028987  211844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:15:12.029055  211844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:15:12.029159  211844 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:15:12.029171  211844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:15:12.029205  211844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:15:12.029272  211844 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:15:12.029281  211844 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:15:12.029301  211844 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:15:12.029358  211844 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.pause-959901 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube pause-959901]
	I0912 22:15:12.326916  211844 provision.go:172] copyRemoteCerts
	I0912 22:15:12.327043  211844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:15:12.327103  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:12.350158  211844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/pause-959901/id_rsa Username:docker}
	I0912 22:15:12.449042  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:15:12.476801  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 22:15:12.505626  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:15:12.529205  211844 provision.go:86] duration metric: configureAuth took 520.36528ms
	I0912 22:15:12.529231  211844 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:15:12.529469  211844 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:12.529579  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:12.566336  211844 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:12.566816  211844 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32984 <nil> <nil>}
	I0912 22:15:12.566848  211844 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:15:17.987139  211844 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
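The drop-in written to /etc/sysconfig/crio.minikube above marks the cluster's service CIDR (10.96.0.0/12, matching ServiceCIDR in the config dump earlier) as an insecure registry before CRI-O is restarted. A tiny sketch of how that file content could be rendered from the CIDR (the function name is illustrative, not minikube's):

package main

import "fmt"

// crioMinikubeDropIn renders the same one-line drop-in the provisioner wrote
// above, treating the in-cluster service CIDR as an insecure registry.
func crioMinikubeDropIn(serviceCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	fmt.Print(crioMinikubeDropIn("10.96.0.0/12"))
	// Prints:
	// CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
}
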
	
	I0912 22:15:17.987165  211844 machine.go:91] provisioned docker machine in 6.318381398s
	I0912 22:15:17.987183  211844 start.go:300] post-start starting for "pause-959901" (driver="docker")
	I0912 22:15:17.987196  211844 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:15:17.987261  211844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:15:17.987306  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:18.006050  211844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/pause-959901/id_rsa Username:docker}
	I0912 22:15:18.129569  211844 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:15:18.132892  211844 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:15:18.132931  211844 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:15:18.132944  211844 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:15:18.132953  211844 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 22:15:18.132968  211844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:15:18.133025  211844 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:15:18.133122  211844 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:15:18.133238  211844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:15:18.141745  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:15:18.164799  211844 start.go:303] post-start completed in 177.596604ms
	I0912 22:15:18.164879  211844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:15:18.164924  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:18.185943  211844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/pause-959901/id_rsa Username:docker}
	I0912 22:15:18.285900  211844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:15:18.290392  211844 fix.go:56] fixHost completed within 6.645624082s
	I0912 22:15:18.290410  211844 start.go:83] releasing machines lock for "pause-959901", held for 6.645667433s
	I0912 22:15:18.290467  211844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-959901
	I0912 22:15:18.308872  211844 ssh_runner.go:195] Run: cat /version.json
	I0912 22:15:18.308912  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:18.308946  211844 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:15:18.309018  211844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-959901
	I0912 22:15:18.328126  211844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/pause-959901/id_rsa Username:docker}
	I0912 22:15:18.329509  211844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32984 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/pause-959901/id_rsa Username:docker}
	I0912 22:15:18.424423  211844 ssh_runner.go:195] Run: systemctl --version
	I0912 22:15:18.520728  211844 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:15:18.664138  211844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:15:18.668624  211844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:15:18.676858  211844 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:15:18.676941  211844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:15:18.684793  211844 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0912 22:15:18.684818  211844 start.go:469] detecting cgroup driver to use...
	I0912 22:15:18.684856  211844 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:15:18.684898  211844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:15:18.699539  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:15:18.713945  211844 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:15:18.714015  211844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:15:18.735195  211844 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:15:18.747715  211844 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:15:19.155459  211844 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:15:19.636021  211844 docker.go:212] disabling docker service ...
	I0912 22:15:19.636100  211844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:15:19.653070  211844 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:15:19.727500  211844 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:15:20.044559  211844 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:15:20.335071  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:15:20.347450  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:15:20.366084  211844 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 22:15:20.366169  211844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:15:20.440332  211844 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:15:20.440398  211844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:15:20.469229  211844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:15:20.481333  211844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
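The sed invocations above pin the pause image to registry.k8s.io/pause:3.9, force cgroup_manager to "cgroupfs", and replace any conmon_cgroup setting with "pod" in CRI-O's 02-crio.conf drop-in. A small sketch of the same line-targeted rewrite done in-memory with Go's regexp package (the starting file contents below are assumed for illustration, not read from the node):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
conmon_cgroup = "system.slice"
cgroup_manager = "systemd"
`
	// Pin the pause image and the cgroup manager, as the sed commands above do.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}
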
	I0912 22:15:20.530405  211844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:15:20.563102  211844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:15:20.578722  211844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:15:20.643948  211844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:15:20.922217  211844 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:15:29.387148  211844 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.464891823s)
	I0912 22:15:29.387196  211844 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:15:29.387250  211844 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:15:29.390617  211844 start.go:537] Will wait 60s for crictl version
	I0912 22:15:29.390662  211844 ssh_runner.go:195] Run: which crictl
	I0912 22:15:29.393631  211844 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:15:29.431160  211844 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 22:15:29.431243  211844 ssh_runner.go:195] Run: crio --version
	I0912 22:15:29.470106  211844 ssh_runner.go:195] Run: crio --version
	I0912 22:15:29.510617  211844 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 22:15:29.511994  211844 cli_runner.go:164] Run: docker network inspect pause-959901 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
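The docker network inspect call above uses a Go template to emit a small JSON document with the network name, driver, subnet, gateway, MTU and container IPs. A simplified variant of that idea in Go, decoding just the first four fields (the full template in the log leaves a trailing comma inside ContainerIPs, so this sketch deliberately omits that field):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// networkInfo matches the simplified --format template used below.
type networkInfo struct {
	Name    string `json:"Name"`
	Driver  string `json:"Driver"`
	Subnet  string `json:"Subnet"`
	Gateway string `json:"Gateway"`
}

func main() {
	format := `{"Name":"{{.Name}}","Driver":"{{.Driver}}",` +
		`"Subnet":"{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
		`"Gateway":"{{range .IPAM.Config}}{{.Gateway}}{{end}}"}`
	out, err := exec.Command("docker", "network", "inspect", "pause-959901", "--format", format).Output()
	if err != nil {
		panic(err)
	}
	var ni networkInfo
	if err := json.Unmarshal(out, &ni); err != nil {
		panic(err)
	}
	// For the cluster above this should report the 192.168.94.x subnet.
	fmt.Printf("network %s (%s): subnet %s, gateway %s\n", ni.Name, ni.Driver, ni.Subnet, ni.Gateway)
}
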
	I0912 22:15:29.529368  211844 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0912 22:15:29.534309  211844 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:29.534375  211844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:15:29.574208  211844 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:15:29.574243  211844 crio.go:415] Images already preloaded, skipping extraction
	I0912 22:15:29.574295  211844 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:15:29.611366  211844 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:15:29.611391  211844 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:15:29.611461  211844 ssh_runner.go:195] Run: crio config
	I0912 22:15:29.672234  211844 cni.go:84] Creating CNI manager for ""
	I0912 22:15:29.672257  211844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 22:15:29.672277  211844 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 22:15:29.672302  211844 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-959901 NodeName:pause-959901 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:15:29.672470  211844 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-959901"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
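The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written further below to /var/tmp/minikube/kubeadm.yaml.new. A stdlib-only Go sketch that splits such a file on its bare "---" separators and lists the kind of each document (illustrative, assuming the separators are plain "---" lines):

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds splits a multi-document kubeadm YAML on "---" separators and
// reports the kind: of each document.
func listKinds(yaml string) []string {
	var kinds []string
	for _, doc := range strings.Split(yaml, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			line = strings.TrimSpace(line)
			if strings.HasPrefix(line, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// For the config above this should print:
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
	fmt.Println(listKinds(string(data)))
}
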
	
	I0912 22:15:29.672554  211844 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=pause-959901 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-959901 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
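The ExecStart line in the kubelet unit above is assembled from the per-node values shown in the config that follows it: the Kubernetes version, node name, node IP and CRI socket. A throwaway sketch that rebuilds the same flag string from those values (the helper name and signature are made up for illustration; this is not minikube's flag builder):

package main

import (
	"fmt"
	"strings"
)

// kubeletFlags reassembles the extra kubelet flags seen in the ExecStart line
// above from the per-node values in the config dump.
func kubeletFlags(k8sVersion, nodeName, nodeIP, criSocket string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--cgroups-per-qos=false",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=" + criSocket,
		"--enforce-node-allocatable=",
		"--hostname-override=" + nodeName,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + nodeIP,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", k8sVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletFlags("v1.28.1", "pause-959901", "192.168.94.2", "unix:///var/run/crio/crio.sock"))
}
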
	I0912 22:15:29.672652  211844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 22:15:29.681266  211844 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:15:29.681350  211844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:15:29.689834  211844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (422 bytes)
	I0912 22:15:29.705776  211844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:15:29.721581  211844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2093 bytes)
	I0912 22:15:29.737899  211844 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:15:29.741130  211844 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901 for IP: 192.168.94.2
	I0912 22:15:29.741170  211844 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:29.741301  211844 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 22:15:29.741355  211844 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 22:15:29.741437  211844 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.key
	I0912 22:15:29.741487  211844 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/apiserver.key.ad8e880a
	I0912 22:15:29.741519  211844 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/proxy-client.key
	I0912 22:15:29.741620  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 22:15:29.741648  211844 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 22:15:29.741658  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 22:15:29.741683  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:15:29.741706  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:15:29.741732  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 22:15:29.741768  211844 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:15:29.742354  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 22:15:29.766734  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 22:15:29.787856  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:15:29.812772  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0912 22:15:29.833797  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:15:29.858499  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:15:29.879642  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:15:29.900738  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 22:15:29.922981  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:15:29.945265  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 22:15:29.967164  211844 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 22:15:29.990871  211844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:15:30.007570  211844 ssh_runner.go:195] Run: openssl version
	I0912 22:15:30.012759  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 22:15:30.022012  211844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 22:15:30.025692  211844 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:15:30.025744  211844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 22:15:30.033075  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:15:30.041704  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:15:30.050968  211844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:15:30.054354  211844 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:15:30.054412  211844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:15:30.061058  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0912 22:15:30.069453  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 22:15:30.078776  211844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 22:15:30.082254  211844 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:15:30.082316  211844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 22:15:30.089451  211844 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
	I0912 22:15:30.097889  211844 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 22:15:30.100844  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0912 22:15:30.107091  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0912 22:15:30.113626  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0912 22:15:30.119800  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0912 22:15:30.126517  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0912 22:15:30.133147  211844 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
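Each openssl x509 -checkend 86400 run above asks whether a certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509 (the path below is one of the certs checked above; everything else is a plain stdlib sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d - the same question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h - it would need to be regenerated")
	}
}
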
	I0912 22:15:30.139506  211844 kubeadm.go:404] StartCluster: {Name:pause-959901 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-959901 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServ
erIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-p
rovisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:15:30.139603  211844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:15:30.139652  211844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:15:30.176217  211844 cri.go:89] found id: "dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b"
	I0912 22:15:30.176243  211844 cri.go:89] found id: "47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300"
	I0912 22:15:30.176250  211844 cri.go:89] found id: "50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68"
	I0912 22:15:30.176258  211844 cri.go:89] found id: "d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405"
	I0912 22:15:30.176264  211844 cri.go:89] found id: "1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8"
	I0912 22:15:30.176270  211844 cri.go:89] found id: "35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7"
	I0912 22:15:30.176277  211844 cri.go:89] found id: "8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb"
	I0912 22:15:30.176283  211844 cri.go:89] found id: "fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399"
	I0912 22:15:30.176290  211844 cri.go:89] found id: "b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a"
	I0912 22:15:30.176306  211844 cri.go:89] found id: "5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e23e909b961683c0e"
	I0912 22:15:30.176317  211844 cri.go:89] found id: "6ef077c3d66c4e13543b63b64d6af6a7f7dad192a265078916405a271d57d6bc"
	I0912 22:15:30.176325  211844 cri.go:89] found id: "108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909"
	I0912 22:15:30.176337  211844 cri.go:89] found id: "02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d"
	I0912 22:15:30.176351  211844 cri.go:89] found id: "883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709"
	I0912 22:15:30.176359  211844 cri.go:89] found id: ""
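The container IDs listed above come from the crictl query a few lines earlier, which prints one ID per line for every kube-system container. A minimal sketch that runs the same query and collects the non-empty IDs (not minikube's cri.go, just an illustration of the same command):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers runs the same crictl query as the log above and
// returns the container IDs, one per non-empty output line.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
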
	I0912 22:15:30.176407  211844 ssh_runner.go:195] Run: sudo runc list -f json
	I0912 22:15:30.207532  211844 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d/userdata","rootfs":"/var/lib/containers/storage/overlay/2bc70f8f26fcb9cd7f69facd331179ec2611050f67a6702730e027fc93a1152a/merged","created":"2023-09-12T22:14:44.021546199Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a934d890","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a934d890\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminatio
nMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:14:43.927430939Z","io.kubernetes.cri-o.Image":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b8fe93e5e1210327dae6d6dea9b37c9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-959901_8b8fe93e5e1210327dae6d6dea9b37c9/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kube
rnetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/2bc70f8f26fcb9cd7f69facd331179ec2611050f67a6702730e027fc93a1152a/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-959901_kube-system_8b8fe93e5e1210327dae6d6dea9b37c9_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/1dced900bc3c425d5e12f093bcd24e784131612eb4f48db6b8e75fcde6fce6a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1dced900bc3c425d5e12f093bcd24e784131612eb4f48db6b8e75fcde6fce6a4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-959901_kube-system_8b8fe93e5e1210327dae6d6dea9b37c9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b8fe93e5e1210327dae6d6dea9b37c9/containers/kube-apiserver/d6e211fe\",\"readonly\":false,\"propagation\":0,\"selinux_re
label\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b8fe93e5e1210327dae6d6dea9b37c9/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-959901","io.k
ubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b8fe93e5e1210327dae6d6dea9b37c9","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.94.2:8443","kubernetes.io/config.hash":"8b8fe93e5e1210327dae6d6dea9b37c9","kubernetes.io/config.seen":"2023-09-12T22:14:43.392014552Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909/userdata","rootfs":"/var/lib/containers/storage/overlay/b3a4d06e9e6a058ac5589e76581fc9182aff677dbc0cb2f465096dca2d4d8c0a/merged","created":"2023-09-12T22:14:44.028284827Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.conta
iner.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:14:43.928664215Z","io.kubernetes.cri-o.Image":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kub
e-scheduler-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5245b2d1aaf8760442ffadc85a404fa1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-959901_5245b2d1aaf8760442ffadc85a404fa1/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b3a4d06e9e6a058ac5589e76581fc9182aff677dbc0cb2f465096dca2d4d8c0a/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-959901_kube-system_5245b2d1aaf8760442ffadc85a404fa1_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e95e4f41c7e2697efcfeaa62556ff4e3e0aeab70db839630c41bbc040785bf3b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e95e4f41c7e2697efcfeaa62556ff4e3e0aeab70db839630c41bbc040785bf3b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-959901_kube-system_5245b2d1aaf8760442ffadc85a404fa1_0","io.kubernetes.cri-o.SeccompProfilePa
th":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5245b2d1aaf8760442ffadc85a404fa1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5245b2d1aaf8760442ffadc85a404fa1/containers/kube-scheduler/b3ce403b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5245b2d1aaf8760442ffadc85a404fa1","kubernetes.io/config.hash":"5245b2d1aaf8760442ffadc85a404fa1","kubernetes.io/config.seen":"2023-09-1
2T22:14:43.392021717Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8/userdata","rootfs":"/var/lib/containers/storage/overlay/89358bf23dd769f53123770834d8ed188f3c3f6fb4ba20aea4c97a34e625af9d/merged","created":"2023-09-12T22:15:18.966551829Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"61920a46","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"61920a46\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.co
ntainer.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.741045635Z","io.kubernetes.cri-o.Image":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.28.1","io.kubernetes.cri-o.ImageRef":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5245b2d1aaf8760442ffadc85a404fa1\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-pause-959901_5245b2d1aaf8760442ffadc85a404fa1/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-sche
duler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/89358bf23dd769f53123770834d8ed188f3c3f6fb4ba20aea4c97a34e625af9d/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-pause-959901_kube-system_5245b2d1aaf8760442ffadc85a404fa1_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/e95e4f41c7e2697efcfeaa62556ff4e3e0aeab70db839630c41bbc040785bf3b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"e95e4f41c7e2697efcfeaa62556ff4e3e0aeab70db839630c41bbc040785bf3b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-pause-959901_kube-system_5245b2d1aaf8760442ffadc85a404fa1_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5245b2d1aaf8760442ffadc85a404fa1/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relab
el\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5245b2d1aaf8760442ffadc85a404fa1/containers/kube-scheduler/e2577ad9\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5245b2d1aaf8760442ffadc85a404fa1","kubernetes.io/config.hash":"5245b2d1aaf8760442ffadc85a404fa1","kubernetes.io/config.seen":"2023-09-12T22:14:43.392021717Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7
/userdata","rootfs":"/var/lib/containers/storage/overlay/1e2ba763fc445f16e92029ebaef91efe6d7d3ec373733e5bfc8aca9eadccb866/merged","created":"2023-09-12T22:15:18.934148399Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2bcac13","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2bcac13\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"c
ontainerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.725165803Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-mtzsr\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ebc
e215d-39b5-449a-9c8f-67054a18fabf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-mtzsr_ebce215d-39b5-449a-9c8f-67054a18fabf/coredns/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/1e2ba763fc445f16e92029ebaef91efe6d7d3ec373733e5bfc8aca9eadccb866/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-mtzsr_kube-system_ebce215d-39b5-449a-9c8f-67054a18fabf_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-mtzsr_kube-system_ebce215d-39b5-449a-9c8f-67054a18fabf_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TT
Y":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/containers/coredns/19e8c1ca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/volumes/kubernetes.io~projected/kube-api-access-dvggc\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-mtzsr","io.kubernetes.pod.namespace":"kube-syst
em","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ebce215d-39b5-449a-9c8f-67054a18fabf","kubernetes.io/config.seen":"2023-09-12T22:15:05.297480642Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300/userdata","rootfs":"/var/lib/containers/storage/overlay/0819922ce10ed08300ed79b3ef67798ab459b92317776c807833c2d6c448b82f/merged","created":"2023-09-12T22:15:19.04268205Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2ab44313","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2ab44313\",
\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.833622381Z","io.kubernetes.cri-o.Image":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5836a4259bbb435443eb176407c59680\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-959901_5836a42
59bbb435443eb176407c59680/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0819922ce10ed08300ed79b3ef67798ab459b92317776c807833c2d6c448b82f/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-959901_kube-system_5836a4259bbb435443eb176407c59680_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/21f89d0b5ddd2a618ee0a4de8a3660b2896edc80b96383f50cc8ec670ac8a54d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"21f89d0b5ddd2a618ee0a4de8a3660b2896edc80b96383f50cc8ec670ac8a54d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-959901_kube-system_5836a4259bbb435443eb176407c59680_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5836a4259bbb435443eb176407c59680/etc-hosts\",\
"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5836a4259bbb435443eb176407c59680/containers/etcd/d0139ee7\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5836a4259bbb435443eb176407c59680","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.94.2:2379","kubernetes.io/config.hash":"5836a4259bbb435443eb176407c59680","kubernetes.io/config.seen":"2023-09-12T22:14:43.392023270Z","kubernetes.io/config.source":"file"}
,"owner":"root"},{"ociVersion":"1.0.2-dev","id":"50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68/userdata","rootfs":"/var/lib/containers/storage/overlay/b9e5d9f02126ce89bc4e899b97718eac3304950ee50c7ea783932804f53fa11b/merged","created":"2023-09-12T22:15:19.129693515Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.ku
bernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.76224087Z","io.kubernetes.cri-o.Image":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d7938fb1bd014274f74a92e537e31344\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-959901_d7938fb1bd014274f74a92e537e31344/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-
manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b9e5d9f02126ce89bc4e899b97718eac3304950ee50c7ea783932804f53fa11b/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-959901_kube-system_d7938fb1bd014274f74a92e537e31344_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3f64d56d033b3ab18b46e669303ad0ff1b9b7157e40c1e8215ae1f4933cf002b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3f64d56d033b3ab18b46e669303ad0ff1b9b7157e40c1e8215ae1f4933cf002b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-959901_kube-system_d7938fb1bd014274f74a92e537e31344_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":f
alse},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d7938fb1bd014274f74a92e537e31344/containers/kube-controller-manager/994eb146\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d7938fb1bd014274f74a92e537e31344/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"sel
inux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d7938fb1bd014274f74a92e537e31344","kubernetes.io/config.hash":"d7938fb1bd014274f74a92e537e31344","kubernetes.io/config.seen":"2023-09-12T22:14:43.392020445Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e23e909b961683c0e","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e
23e909b961683c0e/userdata","rootfs":"/var/lib/containers/storage/overlay/0f88faf0938e433bedc679b4a259658419ab1e47dd969cfc6534f2ceb7b6ce57/merged","created":"2023-09-12T22:15:02.981522649Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5388b6af","io.kubernetes.container.name":"kube-proxy","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5388b6af\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e23e909b961683c0e","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:02.925218432
Z","io.kubernetes.cri-o.Image":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-z2hh7\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9a0e46a6-3795-4959-8b48-576a02252969\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-z2hh7_9a0e46a6-3795-4959-8b48-576a02252969/kube-proxy/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/0f88faf0938e433bedc679b4a259658419ab1e47dd969cfc6534f2ceb7b6ce57/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-z2hh7_kube-system_9a0e46a6-3795-4959-8b48-576a02252969_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-contai
ners/c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-z2hh7_kube-system_9a0e46a6-3795-4959-8b48-576a02252969_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/po
ds/9a0e46a6-3795-4959-8b48-576a02252969/containers/kube-proxy/a9634c6a\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/volumes/kubernetes.io~projected/kube-api-access-mtz4x\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-z2hh7","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9a0e46a6-3795-4959-8b48-576a02252969","kubernetes.io/config.seen":"2023-09-12T22:15:02.224518251Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6ef077c3d66c4e13543b63b64d6af6a7f7dad192a26507891
6405a271d57d6bc","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/6ef077c3d66c4e13543b63b64d6af6a7f7dad192a265078916405a271d57d6bc/userdata","rootfs":"/var/lib/containers/storage/overlay/6c81a656a68f300d27300d8b257c41f5d4d5d76f30f12bf18fb80144abf49b22/merged","created":"2023-09-12T22:14:44.038575895Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"b7243b12","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"b7243b12\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"6ef077c3d66c4e13
543b63b64d6af6a7f7dad192a265078916405a271d57d6bc","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:14:43.942331778Z","io.kubernetes.cri-o.Image":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.28.1","io.kubernetes.cri-o.ImageRef":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d7938fb1bd014274f74a92e537e31344\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-pause-959901_d7938fb1bd014274f74a92e537e31344/kube-controller-manager/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/6c81a656a68f300
d27300d8b257c41f5d4d5d76f30f12bf18fb80144abf49b22/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-pause-959901_kube-system_d7938fb1bd014274f74a92e537e31344_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3f64d56d033b3ab18b46e669303ad0ff1b9b7157e40c1e8215ae1f4933cf002b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3f64d56d033b3ab18b46e669303ad0ff1b9b7157e40c1e8215ae1f4933cf002b","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-pause-959901_kube-system_d7938fb1bd014274f74a92e537e31344_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d7938fb1bd014274f74a92
e537e31344/containers/kube-controller-manager/bfb9ff13\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d7938fb1bd014274f74a92e537e31344/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share
/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d7938fb1bd014274f74a92e537e31344","kubernetes.io/config.hash":"d7938fb1bd014274f74a92e537e31344","kubernetes.io/config.seen":"2023-09-12T22:14:43.392020445Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709/userdata","rootfs":"/var/lib/containers/storage/overlay/d3123217d230453e084677c4e3df8f41be5c55
649afb69e91a9345957232bfc9/merged","created":"2023-09-12T22:14:43.968402246Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"2ab44313","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"2ab44313\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:14:43.907071779Z","io.kubernetes.cri-o.Image":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-
o.ImageName":"registry.k8s.io/etcd:3.5.9-0","io.kubernetes.cri-o.ImageRef":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5836a4259bbb435443eb176407c59680\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-pause-959901_5836a4259bbb435443eb176407c59680/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/d3123217d230453e084677c4e3df8f41be5c55649afb69e91a9345957232bfc9/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-pause-959901_kube-system_5836a4259bbb435443eb176407c59680_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/21f89d0b5ddd2a618ee0a4de8a3660b2896edc80b96383f50cc8ec670ac8a54d/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"21f89d0b5ddd2a618ee0a4de8a3660b
2896edc80b96383f50cc8ec670ac8a54d","io.kubernetes.cri-o.SandboxName":"k8s_etcd-pause-959901_kube-system_5836a4259bbb435443eb176407c59680_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/5836a4259bbb435443eb176407c59680/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5836a4259bbb435443eb176407c59680/containers/etcd/54117470\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":
false}]","io.kubernetes.pod.name":"etcd-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5836a4259bbb435443eb176407c59680","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.94.2:2379","kubernetes.io/config.hash":"5836a4259bbb435443eb176407c59680","kubernetes.io/config.seen":"2023-09-12T22:14:43.392023270Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb/userdata","rootfs":"/var/lib/containers/storage/overlay/50d5403e28290d76287f61c20e3ded9976397c29191aeb4354a6fff7d3c79df8/merged","created":"2023-09-12T22:15:19.021589464Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"5388b6af","io.kubernetes.container.name":"kube-proxy","io.kubernetes.
container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"5388b6af\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.713643346Z","io.kubernetes.cri-o.Image":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-proxy:v1.28.1","io.kubernetes.cri-o.ImageRef":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-
proxy\",\"io.kubernetes.pod.name\":\"kube-proxy-z2hh7\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"9a0e46a6-3795-4959-8b48-576a02252969\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-proxy-z2hh7_9a0e46a6-3795-4959-8b48-576a02252969/kube-proxy/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-proxy\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/50d5403e28290d76287f61c20e3ded9976397c29191aeb4354a6fff7d3c79df8/merged","io.kubernetes.cri-o.Name":"k8s_kube-proxy_kube-proxy-z2hh7_kube-system_9a0e46a6-3795-4959-8b48-576a02252969_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550","io.kubernetes.cri-o.SandboxName":"k8s_kube-proxy-z2hh7_kube-system_9a0e46a6-3795-4959-8b48-576a02252969_0","io.kubernetes.cri-o.Se
ccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/containers/kube-proxy/8995ba81\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/kube-proxy\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/volumes/kubernetes.io~configmap/kube-proxy\",\"readonly\":true,\"p
ropagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/9a0e46a6-3795-4959-8b48-576a02252969/volumes/kubernetes.io~projected/kube-api-access-mtz4x\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-proxy-z2hh7","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"9a0e46a6-3795-4959-8b48-576a02252969","kubernetes.io/config.seen":"2023-09-12T22:15:02.224518251Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a/userdata","rootfs":"/var/lib/containers/storage/overlay/e9cb99aafbd22cf89be837725874e57383f14eb6a71b295842175e5b24fa7e56/merged","created":"2023-09-12T22:15:04.
490579101Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"409dbcb4","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"409dbcb4\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:04.44286668Z","io.kubernetes.cri-o.Image":"docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","io.kubernetes.cri-o.ImageName":"docker.io/ki
ndest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-km9nv\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d59bdd92-bd6e-408a-a28a-dbd1255077a8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-km9nv_d59bdd92-bd6e-408a-a28a-dbd1255077a8/kindnet-cni/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/e9cb99aafbd22cf89be837725874e57383f14eb6a71b295842175e5b24fa7e56/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-km9nv_kube-system_d59bdd92-bd6e-408a-a28a-dbd1255077a8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3e8cefea9c539b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6bd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3e8cefea9c53
9b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6bd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-km9nv_kube-system_d59bdd92-bd6e-408a-a28a-dbd1255077a8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/containers/kindnet-cni/ac468fb8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\
"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/volumes/kubernetes.io~projected/kube-api-access-t48bd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-km9nv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d59bdd92-bd6e-408a-a28a-dbd1255077a8","kubernetes.io/config.seen":"2023-09-12T22:15:02.230170668Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405/userdata","rootfs":"/var/lib/containers/storage/overlay/99a881cc57e4536
a6344c49e3af89f50a07639b1376f57a6b4aac287806c508d/merged","created":"2023-09-12T22:15:19.042903233Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"409dbcb4","io.kubernetes.container.name":"kindnet-cni","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"409dbcb4\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.750578752Z","io.kubernetes.cri-o.Image":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768
d6c8c18cc","io.kubernetes.cri-o.ImageName":"docker.io/kindest/kindnetd:v20230809-80a64d96","io.kubernetes.cri-o.ImageRef":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kindnet-cni\",\"io.kubernetes.pod.name\":\"kindnet-km9nv\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"d59bdd92-bd6e-408a-a28a-dbd1255077a8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kindnet-km9nv_d59bdd92-bd6e-408a-a28a-dbd1255077a8/kindnet-cni/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kindnet-cni\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/99a881cc57e4536a6344c49e3af89f50a07639b1376f57a6b4aac287806c508d/merged","io.kubernetes.cri-o.Name":"k8s_kindnet-cni_kindnet-km9nv_kube-system_d59bdd92-bd6e-408a-a28a-dbd1255077a8_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/3e8cefea9c539b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6
bd/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"3e8cefea9c539b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6bd","io.kubernetes.cri-o.SandboxName":"k8s_kindnet-km9nv_kube-system_d59bdd92-bd6e-408a-a28a-dbd1255077a8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/run/xtables.lock\",\"host_path\":\"/run/xtables.lock\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/lib/modules\",\"host_path\":\"/lib/modules\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/containers/kindnet-cni/e206103
8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/cni/net.d\",\"host_path\":\"/etc/cni/net.d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/d59bdd92-bd6e-408a-a28a-dbd1255077a8/volumes/kubernetes.io~projected/kube-api-access-t48bd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kindnet-km9nv","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"d59bdd92-bd6e-408a-a28a-dbd1255077a8","kubernetes.io/config.seen":"2023-09-12T22:15:02.230170668Z","kubernetes.io/config.source":"api"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b/u
serdata","rootfs":"/var/lib/containers/storage/overlay/998b173aaf1f95c028fd7431a0603d47d89ffebede67b24d424b3c35d39fab7b/merged","created":"2023-09-12T22:15:19.126663055Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"a934d890","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"a934d890\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:18.837515946Z","io.kuberne
tes.cri-o.Image":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.28.1","io.kubernetes.cri-o.ImageRef":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-pause-959901\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"8b8fe93e5e1210327dae6d6dea9b37c9\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-pause-959901_8b8fe93e5e1210327dae6d6dea9b37c9/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/998b173aaf1f95c028fd7431a0603d47d89ffebede67b24d424b3c35d39fab7b/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-pause-959901_kube-system_8b8fe93e5e1210327dae6d6dea9b37c9_1","io.kubernetes.cri-o.ResolvPath
":"/run/containers/storage/overlay-containers/1dced900bc3c425d5e12f093bcd24e784131612eb4f48db6b8e75fcde6fce6a4/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"1dced900bc3c425d5e12f093bcd24e784131612eb4f48db6b8e75fcde6fce6a4","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-pause-959901_kube-system_8b8fe93e5e1210327dae6d6dea9b37c9_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/8b8fe93e5e1210327dae6d6dea9b37c9/containers/kube-apiserver/11d651ca\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/8b8fe93e5e1210327dae6d6dea9b37c9/etc-hosts\",\"readonl
y\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-pause-959901","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"8b8fe93e5e1210327dae6d6dea9b37c9","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.94.2:8443","kubernetes.io/config.hash":"8b8fe93e5e1210327dae6d6dea9b
37c9","kubernetes.io/config.seen":"2023-09-12T22:14:43.392014552Z","kubernetes.io/config.source":"file"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399","pid":0,"status":"stopped","bundle":"/run/containers/storage/overlay-containers/fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399/userdata","rootfs":"/var/lib/containers/storage/overlay/874c104e2b748b3bef4f3056147c762b3a174632b263e759965ec09572b4cb65/merged","created":"2023-09-12T22:15:05.687601992Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"f2bcac13","io.kubernetes.container.name":"coredns","io.kubernetes.container.ports":"[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.
container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"f2bcac13\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"dns\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"UDP\\\"},{\\\"name\\\":\\\"dns-tcp\\\",\\\"containerPort\\\":53,\\\"protocol\\\":\\\"TCP\\\"},{\\\"name\\\":\\\"metrics\\\",\\\"containerPort\\\":9153,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2023-09-12T22:15:05.661919701Z","io.kubernetes.cri-o.IP.0":"10.244.0.2","io.kubernetes.cri-o.Image":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o
.ImageName":"registry.k8s.io/coredns/coredns:v1.10.1","io.kubernetes.cri-o.ImageRef":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"coredns\",\"io.kubernetes.pod.name\":\"coredns-5dd5756b68-mtzsr\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"ebce215d-39b5-449a-9c8f-67054a18fabf\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_coredns-5dd5756b68-mtzsr_ebce215d-39b5-449a-9c8f-67054a18fabf/coredns/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"coredns\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/874c104e2b748b3bef4f3056147c762b3a174632b263e759965ec09572b4cb65/merged","io.kubernetes.cri-o.Name":"k8s_coredns_coredns-5dd5756b68-mtzsr_kube-system_ebce215d-39b5-449a-9c8f-67054a18fabf_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238/userdata/resolv.conf","io.kuber
netes.cri-o.SandboxID":"41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238","io.kubernetes.cri-o.SandboxName":"k8s_coredns-5dd5756b68-mtzsr_kube-system_ebce215d-39b5-449a-9c8f-67054a18fabf_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/coredns\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/volumes/kubernetes.io~configmap/config-volume\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/containers/coredns/9785342b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"contai
ner_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/ebce215d-39b5-449a-9c8f-67054a18fabf/volumes/kubernetes.io~projected/kube-api-access-dvggc\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"coredns-5dd5756b68-mtzsr","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"ebce215d-39b5-449a-9c8f-67054a18fabf","kubernetes.io/config.seen":"2023-09-12T22:15:05.297480642Z","kubernetes.io/config.source":"api"},"owner":"root"}]
	I0912 22:15:30.208216  211844 cri.go:126] list returned 14 containers
	I0912 22:15:30.208231  211844 cri.go:129] container: {ID:02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d Status:stopped}
	I0912 22:15:30.208244  211844 cri.go:135] skipping {02fa783e56fd0c0166b74f17ba40f9416758e2ed36be33426f9cf811a0d4379d stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208254  211844 cri.go:129] container: {ID:108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909 Status:stopped}
	I0912 22:15:30.208260  211844 cri.go:135] skipping {108c36e44c53fcf9afbe2ebe80393730d5fd9cf5e26665a1b9077762802f5909 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208269  211844 cri.go:129] container: {ID:1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8 Status:stopped}
	I0912 22:15:30.208278  211844 cri.go:135] skipping {1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208290  211844 cri.go:129] container: {ID:35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7 Status:stopped}
	I0912 22:15:30.208298  211844 cri.go:135] skipping {35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208306  211844 cri.go:129] container: {ID:47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300 Status:stopped}
	I0912 22:15:30.208324  211844 cri.go:135] skipping {47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208332  211844 cri.go:129] container: {ID:50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 Status:stopped}
	I0912 22:15:30.208337  211844 cri.go:135] skipping {50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208344  211844 cri.go:129] container: {ID:5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e23e909b961683c0e Status:stopped}
	I0912 22:15:30.208349  211844 cri.go:135] skipping {5ca54c51180cd54b01dadec7994a73d0cf06cf03a079022e23e909b961683c0e stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208360  211844 cri.go:129] container: {ID:6ef077c3d66c4e13543b63b64d6af6a7f7dad192a265078916405a271d57d6bc Status:stopped}
	I0912 22:15:30.208372  211844 cri.go:135] skipping {6ef077c3d66c4e13543b63b64d6af6a7f7dad192a265078916405a271d57d6bc stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208383  211844 cri.go:129] container: {ID:883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709 Status:stopped}
	I0912 22:15:30.208393  211844 cri.go:135] skipping {883325a90b053cff496a6227933c12e8808c5ef706502f5ac14041b31f70e709 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208401  211844 cri.go:129] container: {ID:8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb Status:stopped}
	I0912 22:15:30.208406  211844 cri.go:135] skipping {8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208413  211844 cri.go:129] container: {ID:b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a Status:stopped}
	I0912 22:15:30.208418  211844 cri.go:135] skipping {b42183c9f32028d7498278c35ac3112f038e35ab4327b61471b671784b03209a stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208425  211844 cri.go:129] container: {ID:d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405 Status:stopped}
	I0912 22:15:30.208430  211844 cri.go:135] skipping {d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208438  211844 cri.go:129] container: {ID:dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b Status:stopped}
	I0912 22:15:30.208446  211844 cri.go:135] skipping {dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208457  211844 cri.go:129] container: {ID:fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399 Status:stopped}
	I0912 22:15:30.208470  211844 cri.go:135] skipping {fc4a6dc91ebcbdba700063cc40f097af21445cc91c8a0c0807c14fc1c3b0b399 stopped}: state = "stopped", want "paused"
	I0912 22:15:30.208514  211844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:15:30.216904  211844 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0912 22:15:30.216923  211844 kubeadm.go:636] restartCluster start
	I0912 22:15:30.216970  211844 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0912 22:15:30.225256  211844 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:30.225886  211844 kubeconfig.go:92] found "pause-959901" server: "https://192.168.94.2:8443"
	I0912 22:15:30.226829  211844 kapi.go:59] client config for pause-959901: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]stri
ng(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:15:30.227564  211844 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0912 22:15:30.235460  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:30.235512  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:30.245192  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:30.245213  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:30.245255  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:30.254012  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:30.754665  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:30.754762  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:30.764955  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:31.254499  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:31.254587  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:31.263353  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:31.755754  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:31.755814  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:31.769616  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:32.254179  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:32.254263  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:32.267653  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:32.755071  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:32.755161  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:32.765390  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:33.254825  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:33.254920  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:33.265196  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:33.754811  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:33.754900  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:33.765190  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:34.254767  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:34.254841  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:34.264840  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:34.754207  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:34.754296  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:34.764288  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:35.254445  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:35.254518  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:35.264672  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:35.754219  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:35.754297  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:35.765004  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:36.254548  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:36.254621  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:36.265267  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:36.754888  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:36.754977  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:36.766548  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:37.254736  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:37.254804  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:37.265504  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:37.755046  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:37.755134  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:37.765053  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:38.254554  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:38.254613  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:38.266313  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:38.754883  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:38.754952  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:38.765797  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:39.254338  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:39.254417  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:39.265994  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:39.754204  211844 api_server.go:166] Checking apiserver status ...
	I0912 22:15:39.754291  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0912 22:15:39.765170  211844 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:40.235858  211844 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0912 22:15:40.235883  211844 kubeadm.go:1128] stopping kube-system containers ...
	I0912 22:15:40.235899  211844 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0912 22:15:40.235954  211844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:15:40.275372  211844 cri.go:89] found id: "c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e"
	I0912 22:15:40.275404  211844 cri.go:89] found id: "547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83"
	I0912 22:15:40.275410  211844 cri.go:89] found id: "bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6"
	I0912 22:15:40.275416  211844 cri.go:89] found id: "8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e"
	I0912 22:15:40.275421  211844 cri.go:89] found id: "3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3"
	I0912 22:15:40.275428  211844 cri.go:89] found id: "dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b"
	I0912 22:15:40.275433  211844 cri.go:89] found id: "47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300"
	I0912 22:15:40.275439  211844 cri.go:89] found id: "50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68"
	I0912 22:15:40.275445  211844 cri.go:89] found id: "d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405"
	I0912 22:15:40.275455  211844 cri.go:89] found id: "1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8"
	I0912 22:15:40.275464  211844 cri.go:89] found id: "35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7"
	I0912 22:15:40.275472  211844 cri.go:89] found id: "8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb"
	I0912 22:15:40.275485  211844 cri.go:89] found id: ""
	I0912 22:15:40.275491  211844 cri.go:234] Stopping containers: [c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e 547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83 bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6 8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e 3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3 dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b 47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300 50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405 1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8 35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7 8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb]
	I0912 22:15:40.275548  211844 ssh_runner.go:195] Run: which crictl
	I0912 22:15:40.278965  211844 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e 547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83 bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6 8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e 3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3 dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b 47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300 50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405 1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8 35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7 8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb
	W0912 22:15:41.001861  211844 kubeadm.go:689] Failed to stop kube-system containers: port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e 547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83 bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6 8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e 3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3 dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b 47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300 50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 d11530320f1d5d235ca34d5d7f6c8329ddf766a713e502ccaa2b9c2b7b5ef405 1db222ed5f83da57b826a1155b69e760609b73f067fa323284c68565272853b8 35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7 8ed41910dffa4aaa5fd0777af70405e2ab36043b6db8f2c198f6b56f2614f9bb: Process exited with status 1
	stdout:
	c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e
	547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83
	bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6
	8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e
	3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3
	dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b
	47daeda8620b89a0ac74a0b4d0b212a0c85eb6b1a1e70c9529f8620fa88c6300
	
	stderr:
	E0912 22:15:40.998991    3831 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68\": container with ID starting with 50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 not found: ID does not exist" containerID="50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68"
	time="2023-09-12T22:15:40Z" level=fatal msg="stopping the container \"50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68\": rpc error: code = NotFound desc = could not find container \"50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68\": container with ID starting with 50cfb782e5613e21e59d7d49c22e3cd93728083ab8cb333511a659286255aa68 not found: ID does not exist"
	I0912 22:15:41.001931  211844 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0912 22:15:41.080486  211844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:15:41.090846  211844 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Sep 12 22:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Sep 12 22:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1987 Sep 12 22:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5592 Sep 12 22:14 /etc/kubernetes/scheduler.conf
	
	I0912 22:15:41.090921  211844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0912 22:15:41.101552  211844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0912 22:15:41.110597  211844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0912 22:15:41.119260  211844 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:41.119312  211844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0912 22:15:41.128686  211844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0912 22:15:41.138112  211844 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0912 22:15:41.138160  211844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0912 22:15:41.147733  211844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:15:41.158435  211844 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0912 22:15:41.158464  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:41.246424  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:42.137956  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:42.322635  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:42.385401  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:42.539275  211844 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:15:42.539348  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:15:42.555150  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:15:43.065631  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:15:43.566004  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:15:43.625397  211844 api_server.go:72] duration metric: took 1.08612951s to wait for apiserver process to appear ...
	I0912 22:15:43.625424  211844 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:15:43.625443  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:15:46.263033  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0912 22:15:46.263064  211844 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0912 22:15:46.263077  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:15:46.428576  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0912 22:15:46.428629  211844 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0912 22:15:46.929332  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:15:46.933580  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0912 22:15:46.933605  211844 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0912 22:15:47.429769  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:15:47.435090  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0912 22:15:47.442938  211844 api_server.go:141] control plane version: v1.28.1
	I0912 22:15:47.442964  211844 api_server.go:131] duration metric: took 3.817534519s to wait for apiserver health ...
	I0912 22:15:47.442973  211844 cni.go:84] Creating CNI manager for ""
	I0912 22:15:47.442978  211844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 22:15:47.444720  211844 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0912 22:15:47.446089  211844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:15:47.450909  211844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 22:15:47.450927  211844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 22:15:47.470449  211844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:15:48.164407  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:15:48.174574  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:15:48.174604  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 22:15:48.174612  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 22:15:48.174621  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 22:15:48.174629  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 22:15:48.174636  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 22:15:48.174643  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 22:15:48.174651  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 22:15:48.174659  211844 system_pods.go:74] duration metric: took 10.226101ms to wait for pod list to return data ...
	I0912 22:15:48.174668  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:15:48.177742  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:15:48.177765  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:15:48.177774  211844 node_conditions.go:105] duration metric: took 3.098914ms to run NodePressure ...
	I0912 22:15:48.177794  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:48.544645  211844 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549667  211844 kubeadm.go:787] kubelet initialised
	I0912 22:15:48.549695  211844 kubeadm.go:788] duration metric: took 5.019167ms waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549705  211844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:15:48.555832  211844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:50.631682  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:52.632298  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:54.633423  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:56.678849  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:15:56.678872  211844 pod_ready.go:81] duration metric: took 8.123020603s waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:56.678882  211844 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:58.694156  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:16:01.194368  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:16:02.694094  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.694127  211844 pod_ready.go:81] duration metric: took 6.015238147s waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.694143  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699044  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.699066  211844 pod_ready.go:81] duration metric: took 4.915199ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699078  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703925  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.703944  211844 pod_ready.go:81] duration metric: took 4.859474ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703954  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708650  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.708666  211844 pod_ready.go:81] duration metric: took 4.706239ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708673  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713488  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.713505  211844 pod_ready.go:81] duration metric: took 4.826823ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713512  211844 pod_ready.go:38] duration metric: took 14.163791242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:02.713528  211844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 22:16:02.720933  211844 ops.go:34] apiserver oom_adj: -16
	I0912 22:16:02.720954  211844 kubeadm.go:640] restartCluster took 32.5040247s
	I0912 22:16:02.720964  211844 kubeadm.go:406] StartCluster complete in 32.581463145s
	I0912 22:16:02.720984  211844 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.721056  211844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:16:02.722576  211844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.722870  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:16:02.722967  211844 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:16:02.725245  211844 out.go:177] * Enabled addons: 
	I0912 22:16:02.723128  211844 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:16:02.723914  211844 kapi.go:59] client config for pause-959901: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:16:02.726698  211844 addons.go:502] enable addons completed in 3.731707ms: enabled=[]
	I0912 22:16:02.729845  211844 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-959901" context rescaled to 1 replicas
	I0912 22:16:02.729874  211844 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:16:02.731435  211844 out.go:177] * Verifying Kubernetes components...
	I0912 22:16:02.732822  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:02.799271  211844 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 22:16:02.799264  211844 node_ready.go:35] waiting up to 6m0s for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891550  211844 node_ready.go:49] node "pause-959901" has status "Ready":"True"
	I0912 22:16:02.891581  211844 node_ready.go:38] duration metric: took 92.274025ms waiting for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891594  211844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:03.094209  211844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.491973  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.491996  211844 pod_ready.go:81] duration metric: took 397.761263ms waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.492009  211844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891703  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.891726  211844 pod_ready.go:81] duration metric: took 399.709656ms waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891739  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291560  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.291593  211844 pod_ready.go:81] duration metric: took 399.843007ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291607  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691770  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.691795  211844 pod_ready.go:81] duration metric: took 400.178718ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691809  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091401  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.091421  211844 pod_ready.go:81] duration metric: took 399.605265ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091435  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492133  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.492156  211844 pod_ready.go:81] duration metric: took 400.714089ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492172  211844 pod_ready.go:38] duration metric: took 2.600567658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:05.492191  211844 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:16:05.492239  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:16:05.502268  211844 api_server.go:72] duration metric: took 2.772365249s to wait for apiserver process to appear ...
	I0912 22:16:05.502290  211844 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:16:05.502312  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:16:05.506460  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0912 22:16:05.507747  211844 api_server.go:141] control plane version: v1.28.1
	I0912 22:16:05.507769  211844 api_server.go:131] duration metric: took 5.470962ms to wait for apiserver health ...
	I0912 22:16:05.508067  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:16:05.694102  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:16:05.694139  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:05.694147  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:05.694154  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:05.694160  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:05.694168  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:05.694175  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:05.694179  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:05.694186  211844 system_pods.go:74] duration metric: took 186.08578ms to wait for pod list to return data ...
	I0912 22:16:05.694197  211844 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:16:05.891256  211844 default_sa.go:45] found service account: "default"
	I0912 22:16:05.891286  211844 default_sa.go:55] duration metric: took 197.076725ms for default service account to be created ...
	I0912 22:16:05.891298  211844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:16:06.093850  211844 system_pods.go:86] 7 kube-system pods found
	I0912 22:16:06.093878  211844 system_pods.go:89] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:06.093883  211844 system_pods.go:89] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:06.093888  211844 system_pods.go:89] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:06.093892  211844 system_pods.go:89] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:06.093896  211844 system_pods.go:89] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:06.093901  211844 system_pods.go:89] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:06.093905  211844 system_pods.go:89] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:06.093912  211844 system_pods.go:126] duration metric: took 202.60896ms to wait for k8s-apps to be running ...
	I0912 22:16:06.093921  211844 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:16:06.093960  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:06.105434  211844 system_svc.go:56] duration metric: took 11.502861ms WaitForService to wait for kubelet.
	I0912 22:16:06.105462  211844 kubeadm.go:581] duration metric: took 3.375565082s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:16:06.105484  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:16:06.291989  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:16:06.292012  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:16:06.292022  211844 node_conditions.go:105] duration metric: took 186.533943ms to run NodePressure ...
	I0912 22:16:06.292033  211844 start.go:228] waiting for startup goroutines ...
	I0912 22:16:06.292039  211844 start.go:233] waiting for cluster config update ...
	I0912 22:16:06.292047  211844 start.go:242] writing updated cluster config ...
	I0912 22:16:06.292303  211844 ssh_runner.go:195] Run: rm -f paused
	I0912 22:16:06.354421  211844 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:16:06.357076  211844 out.go:177] * Done! kubectl is now configured to use "pause-959901" cluster and "default" namespace by default

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-959901
helpers_test.go:235: (dbg) docker inspect pause-959901:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0",
	        "Created": "2023-09-12T22:14:34.765978332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203933,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T22:14:35.144690774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0508862d812894c98deaaf3533e6d3386b479df1d249d4410a6247f1f44ad45d",
	        "ResolvConfPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/hosts",
	        "LogPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0-json.log",
	        "Name": "/pause-959901",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-959901:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-959901",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b-init/diff:/var/lib/docker/overlay2/27d59bddd44498ba277aabbca5bbef44e363739d94cbe3a544670a142640c048/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-959901",
	                "Source": "/var/lib/docker/volumes/pause-959901/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-959901",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-959901",
	                "name.minikube.sigs.k8s.io": "pause-959901",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6ab422847be213febd454a05d70f76eeb28d6da2296817e28f819199921667c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f6ab422847be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-959901": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4411e8f8fd2d",
	                        "pause-959901"
	                    ],
	                    "NetworkID": "ebe095e8c57d41a952ae6f61cf6e3d174e928370aa959f0a328c56aba3e0c643",
	                    "EndpointID": "a50fc63649a796516db7b995cdade53c8aa5a3d5d83d12e972dd9fdac7e29220",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-959901 -n pause-959901
E0912 22:16:06.482935   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-959901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-959901 logs -n 25: (1.546332638s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo journalctl                       | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo docker                           | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo                                  | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo containerd                       | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo find                             | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo crio                             | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-511142                                       | auto-511142   | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	| start   | -p calico-511142 --memory=3072                       | calico-511142 | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 22:15:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:15:51.288023  223454 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:15:51.288330  223454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:51.288340  223454 out.go:309] Setting ErrFile to fd 2...
	I0912 22:15:51.288348  223454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:51.288542  223454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:15:51.289142  223454 out.go:303] Setting JSON to false
	I0912 22:15:51.290479  223454 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7099,"bootTime":1694549852,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:15:51.290549  223454 start.go:138] virtualization: kvm guest
	I0912 22:15:51.293259  223454 out.go:177] * [calico-511142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:15:51.294995  223454 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:15:51.296518  223454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:15:51.295113  223454 notify.go:220] Checking for updates...
	I0912 22:15:51.299256  223454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:15:51.300702  223454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:15:51.302271  223454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:15:51.303900  223454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:15:51.305737  223454 config.go:182] Loaded profile config "kindnet-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.305868  223454 config.go:182] Loaded profile config "kubernetes-upgrade-533888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.306024  223454 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.306128  223454 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:15:51.330976  223454 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:15:51.331066  223454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:51.392574  223454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-09-12 22:15:51.383972676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:51.392721  223454 docker.go:294] overlay module found
	I0912 22:15:51.394578  223454 out.go:177] * Using the docker driver based on user configuration
	I0912 22:15:47.446089  211844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:15:47.450909  211844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 22:15:47.450927  211844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 22:15:47.470449  211844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:15:48.164407  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:15:48.174574  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:15:48.174604  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 22:15:48.174612  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 22:15:48.174621  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 22:15:48.174629  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 22:15:48.174636  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 22:15:48.174643  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 22:15:48.174651  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 22:15:48.174659  211844 system_pods.go:74] duration metric: took 10.226101ms to wait for pod list to return data ...
	I0912 22:15:48.174668  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:15:48.177742  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:15:48.177765  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:15:48.177774  211844 node_conditions.go:105] duration metric: took 3.098914ms to run NodePressure ...
	I0912 22:15:48.177794  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:48.544645  211844 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549667  211844 kubeadm.go:787] kubelet initialised
	I0912 22:15:48.549695  211844 kubeadm.go:788] duration metric: took 5.019167ms waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549705  211844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:15:48.555832  211844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:50.631682  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:51.396065  223454 start.go:298] selected driver: docker
	I0912 22:15:51.396084  223454 start.go:902] validating driver "docker" against <nil>
	I0912 22:15:51.396098  223454 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:15:51.396970  223454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:51.447595  223454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-09-12 22:15:51.438663805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:51.447752  223454 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 22:15:51.447957  223454 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:15:51.449415  223454 out.go:177] * Using Docker driver with root privileges
	I0912 22:15:51.450698  223454 cni.go:84] Creating CNI manager for "calico"
	I0912 22:15:51.450715  223454 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0912 22:15:51.450727  223454 start_flags.go:321] config:
	{Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:15:51.452103  223454 out.go:177] * Starting control plane node calico-511142 in cluster calico-511142
	I0912 22:15:51.453308  223454 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:15:51.454634  223454 out.go:177] * Pulling base image ...
	I0912 22:15:51.455952  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:51.455981  223454 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:15:51.456027  223454 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:15:51.456043  223454 cache.go:57] Caching tarball of preloaded images
	I0912 22:15:51.456132  223454 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:15:51.456148  223454 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 22:15:51.456266  223454 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json ...
	I0912 22:15:51.456291  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json: {Name:mkb69e099ad8791de986653559089df7dc54b7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:51.472079  223454 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:15:51.472104  223454 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 22:15:51.472156  223454 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:15:51.472207  223454 start.go:365] acquiring machines lock for calico-511142: {Name:mk6e488ee73b47a40a81d830ddbf2a15f85393b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:15:51.472309  223454 start.go:369] acquired machines lock for "calico-511142" in 77.105µs
	I0912 22:15:51.472333  223454 start.go:93] Provisioning new machine with config: &{Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:15:51.472451  223454 start.go:125] createHost starting for "" (driver="docker")
	I0912 22:15:47.786503  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:48.285667  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:48.786043  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:49.286584  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:49.785973  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:50.286117  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:50.785881  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:51.285990  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:51.785886  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:52.286597  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:47.508241  187890 cri.go:89] found id: ""
	I0912 22:15:47.508268  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.508277  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:47.508284  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:47.508351  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:47.550252  187890 cri.go:89] found id: ""
	I0912 22:15:47.550280  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.550290  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:47.550298  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:47.550353  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:47.590302  187890 cri.go:89] found id: ""
	I0912 22:15:47.590330  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.590340  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:47.590348  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:47.590401  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:47.633411  187890 cri.go:89] found id: ""
	I0912 22:15:47.633438  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.633448  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:47.633457  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:47.633507  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:47.672381  187890 cri.go:89] found id: ""
	I0912 22:15:47.672403  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.672410  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:47.672419  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:47.672433  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:47.716906  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:47.716936  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:47.747554  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:47.747597  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:47.797518  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:47.797546  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:47.901931  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:47.901963  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:47.919706  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:47.919736  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:47.993295  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:50.494155  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:50.494596  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:50.494647  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:50.494707  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:50.591532  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:50.591556  187890 cri.go:89] found id: ""
	I0912 22:15:50.591562  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:50.591603  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:50.595906  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:50.595968  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:50.631504  187890 cri.go:89] found id: ""
	I0912 22:15:50.631534  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.631541  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:50.631547  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:50.631602  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:50.666238  187890 cri.go:89] found id: ""
	I0912 22:15:50.666263  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.666270  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:50.666276  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:50.666318  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:50.703457  187890 cri.go:89] found id: ""
	I0912 22:15:50.703488  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.703497  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:50.703506  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:50.703564  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:50.738123  187890 cri.go:89] found id: ""
	I0912 22:15:50.738152  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.738162  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:50.738170  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:50.738217  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:50.771989  187890 cri.go:89] found id: ""
	I0912 22:15:50.772017  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.772029  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:50.772037  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:50.772091  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:50.807556  187890 cri.go:89] found id: ""
	I0912 22:15:50.807590  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.807601  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:50.807611  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:50.807676  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:51.001275  187890 cri.go:89] found id: ""
	I0912 22:15:51.001298  187890 logs.go:284] 0 containers: []
	W0912 22:15:51.001305  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:51.001313  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:51.001327  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:51.093714  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:51.093747  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:51.154354  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:51.154382  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:51.219306  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:51.219430  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:51.219450  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:51.266577  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:51.266611  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:51.294163  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:51.294193  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:52.785986  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.285896  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.785810  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.864227  213173 kubeadm.go:1081] duration metric: took 11.913689267s to wait for elevateKubeSystemPrivileges.
	I0912 22:15:53.864263  213173 kubeadm.go:406] StartCluster complete in 22.407904287s
	I0912 22:15:53.864284  213173 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:53.864358  213173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:15:53.865749  213173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:53.865989  213173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:15:53.866250  213173 config.go:182] Loaded profile config "kindnet-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:53.866430  213173 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:15:53.866496  213173 addons.go:69] Setting storage-provisioner=true in profile "kindnet-511142"
	I0912 22:15:53.866518  213173 addons.go:231] Setting addon storage-provisioner=true in "kindnet-511142"
	I0912 22:15:53.866573  213173 host.go:66] Checking if "kindnet-511142" exists ...
	I0912 22:15:53.866641  213173 addons.go:69] Setting default-storageclass=true in profile "kindnet-511142"
	I0912 22:15:53.866661  213173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-511142"
	I0912 22:15:53.866913  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.867082  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.899997  213173 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-511142" context rescaled to 1 replicas
	I0912 22:15:53.900032  213173 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:15:53.902537  213173 out.go:177] * Verifying Kubernetes components...
	I0912 22:15:53.903900  213173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:15:53.905298  213173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:15:53.906646  213173 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:15:53.906663  213173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 22:15:53.906723  213173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-511142
	I0912 22:15:53.916740  213173 addons.go:231] Setting addon default-storageclass=true in "kindnet-511142"
	I0912 22:15:53.916783  213173 host.go:66] Checking if "kindnet-511142" exists ...
	I0912 22:15:53.917108  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.932173  213173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/kindnet-511142/id_rsa Username:docker}
	I0912 22:15:53.959298  213173 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 22:15:53.959324  213173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 22:15:53.959377  213173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-511142
	I0912 22:15:53.974952  213173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 22:15:53.980312  213173 node_ready.go:35] waiting up to 15m0s for node "kindnet-511142" to be "Ready" ...
	I0912 22:15:53.984765  213173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/kindnet-511142/id_rsa Username:docker}
	I0912 22:15:54.143524  213173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 22:15:54.157704  213173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:15:54.549858  213173 start.go:917] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0912 22:15:55.110224  213173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0912 22:15:51.474397  223454 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0912 22:15:51.474698  223454 start.go:159] libmachine.API.Create for "calico-511142" (driver="docker")
	I0912 22:15:51.474737  223454 client.go:168] LocalClient.Create starting
	I0912 22:15:51.474803  223454 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 22:15:51.474850  223454 main.go:141] libmachine: Decoding PEM data...
	I0912 22:15:51.474870  223454 main.go:141] libmachine: Parsing certificate...
	I0912 22:15:51.474919  223454 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 22:15:51.474940  223454 main.go:141] libmachine: Decoding PEM data...
	I0912 22:15:51.474951  223454 main.go:141] libmachine: Parsing certificate...
	I0912 22:15:51.475255  223454 cli_runner.go:164] Run: docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 22:15:51.491342  223454 cli_runner.go:211] docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 22:15:51.491426  223454 network_create.go:281] running [docker network inspect calico-511142] to gather additional debugging logs...
	I0912 22:15:51.491449  223454 cli_runner.go:164] Run: docker network inspect calico-511142
	W0912 22:15:51.507224  223454 cli_runner.go:211] docker network inspect calico-511142 returned with exit code 1
	I0912 22:15:51.507252  223454 network_create.go:284] error running [docker network inspect calico-511142]: docker network inspect calico-511142: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-511142 not found
	I0912 22:15:51.507264  223454 network_create.go:286] output of [docker network inspect calico-511142]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-511142 not found
	
	** /stderr **
	I0912 22:15:51.507321  223454 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:15:51.524491  223454 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-38edbaf277f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:77:7e:89} reservation:<nil>}
	I0912 22:15:51.525119  223454 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd1ba5635088 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:dc:7c:21:dd} reservation:<nil>}
	I0912 22:15:51.525906  223454 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb713f90456f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:84:8e:96:6c} reservation:<nil>}
	I0912 22:15:51.526461  223454 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ef86beeb6a57 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:df:5c:b7:a4} reservation:<nil>}
	I0912 22:15:51.527144  223454 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010c4a80}
	I0912 22:15:51.527172  223454 network_create.go:123] attempt to create docker network calico-511142 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0912 22:15:51.527215  223454 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-511142 calico-511142
	I0912 22:15:51.579756  223454 network_create.go:107] docker network calico-511142 192.168.85.0/24 created
	I0912 22:15:51.579797  223454 kic.go:117] calculated static IP "192.168.85.2" for the "calico-511142" container
	I0912 22:15:51.579868  223454 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 22:15:51.596720  223454 cli_runner.go:164] Run: docker volume create calico-511142 --label name.minikube.sigs.k8s.io=calico-511142 --label created_by.minikube.sigs.k8s.io=true
	I0912 22:15:51.614050  223454 oci.go:103] Successfully created a docker volume calico-511142
	I0912 22:15:51.614145  223454 cli_runner.go:164] Run: docker run --rm --name calico-511142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-511142 --entrypoint /usr/bin/test -v calico-511142:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 22:15:52.122909  223454 oci.go:107] Successfully prepared a docker volume calico-511142
	I0912 22:15:52.122943  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:52.122961  223454 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 22:15:52.123016  223454 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-511142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 22:15:52.632298  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:54.633423  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:55.152227  213173 addons.go:502] enable addons completed in 1.28578599s: enabled=[default-storageclass storage-provisioner]
	I0912 22:15:56.027752  213173 node_ready.go:58] node "kindnet-511142" has status "Ready":"False"
	I0912 22:15:53.839193  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:53.839594  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:53.839630  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:53.839676  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:53.898981  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:53.899007  187890 cri.go:89] found id: ""
	I0912 22:15:53.899016  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:53.899070  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:53.903821  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:53.903886  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:53.979602  187890 cri.go:89] found id: ""
	I0912 22:15:53.979625  187890 logs.go:284] 0 containers: []
	W0912 22:15:53.979635  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:53.979643  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:53.979690  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:54.018553  187890 cri.go:89] found id: ""
	I0912 22:15:54.018581  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.018588  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:54.018594  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:54.018644  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:54.071283  187890 cri.go:89] found id: ""
	I0912 22:15:54.071310  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.071319  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:54.071326  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:54.071390  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:54.109404  187890 cri.go:89] found id: ""
	I0912 22:15:54.109431  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.109441  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:54.109448  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:54.109495  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:54.165398  187890 cri.go:89] found id: ""
	I0912 22:15:54.165423  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.165432  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:54.165439  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:54.165492  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:54.205991  187890 cri.go:89] found id: ""
	I0912 22:15:54.206016  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.206025  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:54.206032  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:54.206087  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:54.266368  187890 cri.go:89] found id: ""
	I0912 22:15:54.266394  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.266404  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:54.266427  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:54.266446  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:54.283894  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:54.283926  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:54.357665  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:54.357746  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:54.357770  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:54.403585  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:54.403612  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:54.434821  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:54.434923  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:54.480958  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:54.480994  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:57.092117  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:57.092541  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:57.092622  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:57.092688  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:57.125605  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:57.125633  187890 cri.go:89] found id: ""
	I0912 22:15:57.125641  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:57.125701  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:57.128997  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:57.129065  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:57.160064  187890 cri.go:89] found id: ""
	I0912 22:15:57.160087  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.160094  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:57.160099  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:57.160157  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:57.192350  187890 cri.go:89] found id: ""
	I0912 22:15:57.192377  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.192387  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:57.192394  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:57.192437  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:57.223488  187890 cri.go:89] found id: ""
	I0912 22:15:57.223513  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.223520  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:57.223526  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:57.223577  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:57.255398  187890 cri.go:89] found id: ""
	I0912 22:15:57.255424  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.255434  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:57.255442  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:57.255494  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:57.288150  187890 cri.go:89] found id: ""
	I0912 22:15:57.288178  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.288185  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:57.288190  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:57.288232  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:57.322252  187890 cri.go:89] found id: ""
	I0912 22:15:57.322275  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.322281  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:57.322287  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:57.322340  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:57.353894  187890 cri.go:89] found id: ""
	I0912 22:15:57.353922  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.353929  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:57.353937  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:57.353948  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:57.435748  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:57.435787  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:57.454319  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:57.454348  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 22:15:57.690890  223454 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-511142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.567809565s)
	I0912 22:15:57.690924  223454 kic.go:199] duration metric: took 5.567959 seconds to extract preloaded images to volume
	W0912 22:15:57.691075  223454 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 22:15:57.691198  223454 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 22:15:57.747180  223454 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-511142 --name calico-511142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-511142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-511142 --network calico-511142 --ip 192.168.85.2 --volume calico-511142:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 22:15:58.103350  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Running}}
	I0912 22:15:58.121597  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.139925  223454 cli_runner.go:164] Run: docker exec calico-511142 stat /var/lib/dpkg/alternatives/iptables
	I0912 22:15:58.185216  223454 oci.go:144] the created container "calico-511142" has a running status.
	I0912 22:15:58.185247  223454 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa...
	I0912 22:15:58.587587  223454 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 22:15:58.607883  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.626133  223454 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 22:15:58.626165  223454 kic_runner.go:114] Args: [docker exec --privileged calico-511142 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 22:15:58.729208  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.751850  223454 machine.go:88] provisioning docker machine ...
	I0912 22:15:58.751891  223454 ubuntu.go:169] provisioning hostname "calico-511142"
	I0912 22:15:58.751944  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:58.779223  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:58.779752  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:58.779780  223454 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-511142 && echo "calico-511142" | sudo tee /etc/hostname
	I0912 22:15:58.931626  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-511142
	
	I0912 22:15:58.931728  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:58.953164  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:58.953603  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:58.953638  223454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-511142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-511142/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-511142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:15:59.092468  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:15:59.092495  223454 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:15:59.092525  223454 ubuntu.go:177] setting up certificates
	I0912 22:15:59.092535  223454 provision.go:83] configureAuth start
	I0912 22:15:59.092588  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:15:59.108374  223454 provision.go:138] copyHostCerts
	I0912 22:15:59.108448  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:15:59.108460  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:15:59.108536  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:15:59.108661  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:15:59.108673  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:15:59.108704  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:15:59.108770  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:15:59.108780  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:15:59.108803  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:15:59.108860  223454 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.calico-511142 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-511142]
	I0912 22:15:59.276858  223454 provision.go:172] copyRemoteCerts
	I0912 22:15:59.276910  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:15:59.276942  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.293592  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.393073  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:15:59.415545  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 22:15:59.437272  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:15:59.459930  223454 provision.go:86] duration metric: configureAuth took 367.37612ms
	I0912 22:15:59.459962  223454 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:15:59.460145  223454 config.go:182] Loaded profile config "calico-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:59.460253  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.476646  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:59.476970  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:59.476991  223454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:15:59.703236  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:15:59.703259  223454 machine.go:91] provisioned docker machine in 951.381638ms
	I0912 22:15:59.703268  223454 client.go:171] LocalClient.Create took 8.228519867s
	I0912 22:15:59.703287  223454 start.go:167] duration metric: libmachine.API.Create for "calico-511142" took 8.228594993s
	I0912 22:15:59.703293  223454 start.go:300] post-start starting for "calico-511142" (driver="docker")
	I0912 22:15:59.703302  223454 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:15:59.703364  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:15:59.703399  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.720749  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.817592  223454 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:15:59.820643  223454 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:15:59.820687  223454 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:15:59.820706  223454 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:15:59.820718  223454 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 22:15:59.820731  223454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:15:59.820791  223454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:15:59.820878  223454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:15:59.820988  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:15:59.828482  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:15:59.849536  223454 start.go:303] post-start completed in 146.230184ms
	I0912 22:15:59.849898  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:15:59.867131  223454 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json ...
	I0912 22:15:59.867360  223454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:15:59.867403  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.884236  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.981061  223454 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:15:59.984937  223454 start.go:128] duration metric: createHost completed in 8.512472758s
	I0912 22:15:59.984963  223454 start.go:83] releasing machines lock for "calico-511142", held for 8.512642264s
	I0912 22:15:59.985018  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:16:00.003158  223454 ssh_runner.go:195] Run: cat /version.json
	I0912 22:16:00.003209  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:16:00.003218  223454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:16:00.003271  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:16:00.023617  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:16:00.024000  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:16:00.119919  223454 ssh_runner.go:195] Run: systemctl --version
	I0912 22:16:00.215356  223454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:16:00.356813  223454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:16:00.361476  223454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:16:00.379092  223454 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:16:00.379183  223454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:16:00.408005  223454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 22:16:00.408030  223454 start.go:469] detecting cgroup driver to use...
	I0912 22:16:00.408063  223454 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:16:00.408105  223454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:16:00.422939  223454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:16:00.433621  223454 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:16:00.433683  223454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:16:00.445886  223454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:16:00.461741  223454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:16:00.548867  223454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:16:00.633463  223454 docker.go:212] disabling docker service ...
	I0912 22:16:00.633509  223454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:16:00.652794  223454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:16:00.663363  223454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:16:00.737058  223454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:16:00.820777  223454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:16:00.831652  223454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:16:00.846472  223454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 22:16:00.846522  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.855442  223454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:16:00.855494  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.864325  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.873760  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.882467  223454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:16:00.890971  223454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:16:00.898251  223454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:16:00.906302  223454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:16:00.981328  223454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:16:01.098268  223454 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:16:01.098336  223454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0912 22:16:01.101887  223454 start.go:537] Will wait 60s for crictl version
	I0912 22:16:01.101937  223454 ssh_runner.go:195] Run: which crictl
	I0912 22:16:01.105140  223454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:16:01.137622  223454 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 22:16:01.137696  223454 ssh_runner.go:195] Run: crio --version
	I0912 22:16:01.170217  223454 ssh_runner.go:195] Run: crio --version
	I0912 22:16:01.206488  223454 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 22:16:01.207796  223454 cli_runner.go:164] Run: docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:16:01.223787  223454 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0912 22:16:01.227375  223454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:16:01.238220  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:16:01.238273  223454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:15:56.678849  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:15:56.678872  211844 pod_ready.go:81] duration metric: took 8.123020603s waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:56.678882  211844 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:58.694156  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:16:01.194368  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:58.027941  213173 node_ready.go:58] node "kindnet-511142" has status "Ready":"False"
	I0912 22:15:59.028166  213173 node_ready.go:49] node "kindnet-511142" has status "Ready":"True"
	I0912 22:15:59.028200  213173 node_ready.go:38] duration metric: took 5.047863011s waiting for node "kindnet-511142" to be "Ready" ...
	I0912 22:15:59.028212  213173 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:15:59.038795  213173 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.060396  213173 pod_ready.go:92] pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.060423  213173 pod_ready.go:81] duration metric: took 1.02159687s waiting for pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.060436  213173 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.065729  213173 pod_ready.go:92] pod "etcd-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.065751  213173 pod_ready.go:81] duration metric: took 5.308736ms waiting for pod "etcd-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.065763  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.125812  213173 pod_ready.go:92] pod "kube-apiserver-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.125850  213173 pod_ready.go:81] duration metric: took 60.081242ms waiting for pod "kube-apiserver-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.125860  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.228806  213173 pod_ready.go:92] pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.228830  213173 pod_ready.go:81] duration metric: took 102.962871ms waiting for pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.228843  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-nwvr2" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.628765  213173 pod_ready.go:92] pod "kube-proxy-nwvr2" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.628788  213173 pod_ready.go:81] duration metric: took 399.937625ms waiting for pod "kube-proxy-nwvr2" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.628797  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:01.027279  213173 pod_ready.go:92] pod "kube-scheduler-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:01.027299  213173 pod_ready.go:81] duration metric: took 398.495291ms waiting for pod "kube-scheduler-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:01.027308  213173 pod_ready.go:38] duration metric: took 1.999084417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:01.027322  213173 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:16:01.027364  213173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:16:01.038523  213173 api_server.go:72] duration metric: took 7.138463939s to wait for apiserver process to appear ...
	I0912 22:16:01.038548  213173 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:16:01.038566  213173 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0912 22:16:01.042683  213173 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0912 22:16:01.043753  213173 api_server.go:141] control plane version: v1.28.1
	I0912 22:16:01.043776  213173 api_server.go:131] duration metric: took 5.220452ms to wait for apiserver health ...
	I0912 22:16:01.043789  213173 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:16:01.230750  213173 system_pods.go:59] 8 kube-system pods found
	I0912 22:16:01.230774  213173 system_pods.go:61] "coredns-5dd5756b68-k62tg" [93ad447d-f782-4de2-845a-ecf4d1dc614e] Running
	I0912 22:16:01.230779  213173 system_pods.go:61] "etcd-kindnet-511142" [c5b7e8fa-05b6-435e-a041-5e62ebc70550] Running
	I0912 22:16:01.230784  213173 system_pods.go:61] "kindnet-rm5qw" [18b6f27d-5f3d-4d34-9686-d24bb3d27c25] Running
	I0912 22:16:01.230788  213173 system_pods.go:61] "kube-apiserver-kindnet-511142" [09bf7a95-529c-4d2f-aad2-2f1736da3202] Running
	I0912 22:16:01.230792  213173 system_pods.go:61] "kube-controller-manager-kindnet-511142" [0687342a-6101-4597-a611-627efc1ebac2] Running
	I0912 22:16:01.230796  213173 system_pods.go:61] "kube-proxy-nwvr2" [66707d9a-d499-449d-acb3-166500397ddd] Running
	I0912 22:16:01.230799  213173 system_pods.go:61] "kube-scheduler-kindnet-511142" [c22d8c1a-0139-4eb2-8e38-72ae60604c19] Running
	I0912 22:16:01.230803  213173 system_pods.go:61] "storage-provisioner" [447aa53e-da19-44e7-9b34-eb37f75c156e] Running
	I0912 22:16:01.230808  213173 system_pods.go:74] duration metric: took 187.013363ms to wait for pod list to return data ...
	I0912 22:16:01.230819  213173 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:16:01.427800  213173 default_sa.go:45] found service account: "default"
	I0912 22:16:01.427828  213173 default_sa.go:55] duration metric: took 197.002262ms for default service account to be created ...
	I0912 22:16:01.427841  213173 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:16:01.631451  213173 system_pods.go:86] 8 kube-system pods found
	I0912 22:16:01.631481  213173 system_pods.go:89] "coredns-5dd5756b68-k62tg" [93ad447d-f782-4de2-845a-ecf4d1dc614e] Running
	I0912 22:16:01.631489  213173 system_pods.go:89] "etcd-kindnet-511142" [c5b7e8fa-05b6-435e-a041-5e62ebc70550] Running
	I0912 22:16:01.631496  213173 system_pods.go:89] "kindnet-rm5qw" [18b6f27d-5f3d-4d34-9686-d24bb3d27c25] Running
	I0912 22:16:01.631503  213173 system_pods.go:89] "kube-apiserver-kindnet-511142" [09bf7a95-529c-4d2f-aad2-2f1736da3202] Running
	I0912 22:16:01.631510  213173 system_pods.go:89] "kube-controller-manager-kindnet-511142" [0687342a-6101-4597-a611-627efc1ebac2] Running
	I0912 22:16:01.631522  213173 system_pods.go:89] "kube-proxy-nwvr2" [66707d9a-d499-449d-acb3-166500397ddd] Running
	I0912 22:16:01.631532  213173 system_pods.go:89] "kube-scheduler-kindnet-511142" [c22d8c1a-0139-4eb2-8e38-72ae60604c19] Running
	I0912 22:16:01.631539  213173 system_pods.go:89] "storage-provisioner" [447aa53e-da19-44e7-9b34-eb37f75c156e] Running
	I0912 22:16:01.631550  213173 system_pods.go:126] duration metric: took 203.7028ms to wait for k8s-apps to be running ...
	I0912 22:16:01.631562  213173 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:16:01.631618  213173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:01.642347  213173 system_svc.go:56] duration metric: took 10.772442ms WaitForService to wait for kubelet.
	I0912 22:16:01.642369  213173 kubeadm.go:581] duration metric: took 7.742318263s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:16:01.642385  213173 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:16:01.827823  213173 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:16:01.827843  213173 node_conditions.go:123] node cpu capacity is 8
	I0912 22:16:01.827854  213173 node_conditions.go:105] duration metric: took 185.463681ms to run NodePressure ...
	I0912 22:16:01.827864  213173 start.go:228] waiting for startup goroutines ...
	I0912 22:16:01.827870  213173 start.go:233] waiting for cluster config update ...
	I0912 22:16:01.827880  213173 start.go:242] writing updated cluster config ...
	I0912 22:16:01.828111  213173 ssh_runner.go:195] Run: rm -f paused
	I0912 22:16:01.887836  213173 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:16:01.890220  213173 out.go:177] * Done! kubectl is now configured to use "kindnet-511142" cluster and "default" namespace by default
	W0912 22:15:57.510643  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:57.510673  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:57.510690  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:57.563318  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:57.563349  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:57.590766  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:57.590796  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:00.126306  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:16:00.126679  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:16:00.126726  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:16:00.126775  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:16:00.158629  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:00.158658  187890 cri.go:89] found id: ""
	I0912 22:16:00.158667  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:16:00.158716  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:16:00.161961  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:16:00.162021  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:16:00.194254  187890 cri.go:89] found id: ""
	I0912 22:16:00.194278  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.194286  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:16:00.194294  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:16:00.194350  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:16:00.226449  187890 cri.go:89] found id: ""
	I0912 22:16:00.226475  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.226484  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:16:00.226492  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:16:00.226550  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:16:00.265385  187890 cri.go:89] found id: ""
	I0912 22:16:00.265407  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.265419  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:16:00.265426  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:16:00.265470  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:16:00.297646  187890 cri.go:89] found id: ""
	I0912 22:16:00.297675  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.297687  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:16:00.297696  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:16:00.297749  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:16:00.329347  187890 cri.go:89] found id: ""
	I0912 22:16:00.329370  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.329376  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:16:00.329383  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:16:00.329426  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:16:00.362045  187890 cri.go:89] found id: ""
	I0912 22:16:00.362066  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.362076  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:16:00.362083  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:16:00.362131  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:16:00.399017  187890 cri.go:89] found id: ""
	I0912 22:16:00.399044  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.399054  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:16:00.399064  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:16:00.399075  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:16:00.426030  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:16:00.426060  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:00.466834  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:16:00.466855  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:16:00.573423  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:16:00.573457  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:16:00.592387  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:16:00.592425  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:16:00.652225  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:16:00.652248  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:16:00.652259  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:02.694094  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.694127  211844 pod_ready.go:81] duration metric: took 6.015238147s waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.694143  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699044  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.699066  211844 pod_ready.go:81] duration metric: took 4.915199ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699078  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703925  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.703944  211844 pod_ready.go:81] duration metric: took 4.859474ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703954  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708650  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.708666  211844 pod_ready.go:81] duration metric: took 4.706239ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708673  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713488  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.713505  211844 pod_ready.go:81] duration metric: took 4.826823ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713512  211844 pod_ready.go:38] duration metric: took 14.163791242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:02.713528  211844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 22:16:02.720933  211844 ops.go:34] apiserver oom_adj: -16
	I0912 22:16:02.720954  211844 kubeadm.go:640] restartCluster took 32.5040247s
	I0912 22:16:02.720964  211844 kubeadm.go:406] StartCluster complete in 32.581463145s
	I0912 22:16:02.720984  211844 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.721056  211844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:16:02.722576  211844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.722870  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:16:02.722967  211844 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:16:02.725245  211844 out.go:177] * Enabled addons: 
	I0912 22:16:02.723128  211844 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:16:02.723914  211844 kapi.go:59] client config for pause-959901: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:16:02.726698  211844 addons.go:502] enable addons completed in 3.731707ms: enabled=[]
	I0912 22:16:02.729845  211844 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-959901" context rescaled to 1 replicas
	I0912 22:16:02.729874  211844 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:16:02.731435  211844 out.go:177] * Verifying Kubernetes components...
	I0912 22:16:01.291455  223454 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:16:01.291474  223454 crio.go:415] Images already preloaded, skipping extraction
	I0912 22:16:01.291513  223454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:16:01.323370  223454 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:16:01.323394  223454 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:16:01.323443  223454 ssh_runner.go:195] Run: crio config
	I0912 22:16:01.366097  223454 cni.go:84] Creating CNI manager for "calico"
	I0912 22:16:01.366129  223454 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 22:16:01.366153  223454 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-511142 NodeName:calico-511142 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:16:01.366280  223454 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-511142"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:16:01.366344  223454 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=calico-511142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0912 22:16:01.366390  223454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 22:16:01.374632  223454 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:16:01.374695  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:16:01.382385  223454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0912 22:16:01.398278  223454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:16:01.414100  223454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0912 22:16:01.430265  223454 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:16:01.433269  223454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:16:01.443019  223454 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142 for IP: 192.168.85.2
	I0912 22:16:01.443049  223454 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.443183  223454 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 22:16:01.443236  223454 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 22:16:01.443290  223454 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key
	I0912 22:16:01.443319  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt with IP's: []
	I0912 22:16:01.676654  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt ...
	I0912 22:16:01.676680  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: {Name:mka45ef1b913de9346a5f19fd570d11dafcf85f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.676871  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key ...
	I0912 22:16:01.676885  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key: {Name:mk36dc012b44c1fe4138f5dbcd529788548f871c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.676984  223454 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c
	I0912 22:16:01.676999  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 22:16:01.825408  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c ...
	I0912 22:16:01.825436  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c: {Name:mk3fcc74426c6993822e92b0b60a55a5f0c47cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.825612  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c ...
	I0912 22:16:01.825627  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c: {Name:mk53f7083d604b80843316125be439f6e63da4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.825725  223454 certs.go:337] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt
	I0912 22:16:01.825814  223454 certs.go:341] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key
	I0912 22:16:01.825870  223454 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key
	I0912 22:16:01.825881  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt with IP's: []
	I0912 22:16:02.097739  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt ...
	I0912 22:16:02.097768  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt: {Name:mkd23ffd8017921e36ac4c9139acd85c8d83a9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.097920  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key ...
	I0912 22:16:02.097930  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key: {Name:mk87a5f3fa3343ea9b4e3fc9451edc2267b7186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.098080  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 22:16:02.098119  223454 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 22:16:02.098134  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 22:16:02.098170  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:16:02.098196  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:16:02.098219  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 22:16:02.098255  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:16:02.098796  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 22:16:02.120937  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 22:16:02.142109  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:16:02.162535  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 22:16:02.183284  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:16:02.204528  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:16:02.225679  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:16:02.246630  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 22:16:02.267764  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 22:16:02.288498  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 22:16:02.308780  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:16:02.329046  223454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:16:02.344242  223454 ssh_runner.go:195] Run: openssl version
	I0912 22:16:02.349060  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 22:16:02.357102  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.360258  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.360303  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.366482  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
	I0912 22:16:02.374337  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 22:16:02.382344  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.385201  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.385233  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.391297  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:16:02.400524  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:16:02.409907  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.413670  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.413728  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.420633  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
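	The three openssl/ln pairs above install each CA certificate into the node's trust store under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A minimal, hypothetical sketch of that pattern in Go follows; it is not minikube's certs.go, and only the paths are taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCert hashes a PEM certificate with openssl and symlinks it into
// /etc/ssl/certs under "<subject-hash>.0", the layout the log above sets up.
func linkCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Same guard as the logged command: only (re)create the link if missing.
	script := fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link)
	return exec.Command("sudo", "/bin/bash", "-c", script).Run()
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/22698.pem",
		"/usr/share/ca-certificates/226982.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
	} {
		if err := linkCert(pem); err != nil {
			fmt.Println("link failed:", err)
		}
	}
}
```

	The `test -L ... || ln -fs ...` guard mirrors the logged commands: the hash-named symlink is only created when it does not already exist.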
	I0912 22:16:02.428832  223454 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 22:16:02.431608  223454 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:16:02.431653  223454 kubeadm.go:404] StartCluster: {Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:16:02.431728  223454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:16:02.431781  223454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:16:02.464318  223454 cri.go:89] found id: ""
	I0912 22:16:02.464382  223454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:16:02.472448  223454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:16:02.480340  223454 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0912 22:16:02.480406  223454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:16:02.487841  223454 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:16:02.487879  223454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 22:16:02.532467  223454 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 22:16:02.532535  223454 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 22:16:02.566295  223454 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:16:02.566391  223454 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 22:16:02.566426  223454 kubeadm.go:322] OS: Linux
	I0912 22:16:02.566479  223454 kubeadm.go:322] CGROUPS_CPU: enabled
	I0912 22:16:02.566526  223454 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0912 22:16:02.566574  223454 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0912 22:16:02.566637  223454 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0912 22:16:02.566699  223454 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0912 22:16:02.566767  223454 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0912 22:16:02.566836  223454 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0912 22:16:02.566893  223454 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0912 22:16:02.566963  223454 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0912 22:16:02.628115  223454 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:16:02.628255  223454 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:16:02.628414  223454 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:16:02.838383  223454 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:16:02.840917  223454 out.go:204]   - Generating certificates and keys ...
	I0912 22:16:02.841079  223454 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 22:16:02.841167  223454 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 22:16:02.905532  223454 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:16:03.044965  223454 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:16:03.264971  223454 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:16:03.548782  223454 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 22:16:03.643129  223454 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 22:16:03.643261  223454 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-511142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0912 22:16:03.828995  223454 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 22:16:03.829155  223454 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-511142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0912 22:16:04.093189  223454 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:16:04.205006  223454 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:16:04.452072  223454 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 22:16:04.452272  223454 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:16:04.688079  223454 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:16:05.053111  223454 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:16:05.166787  223454 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:16:05.226784  223454 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:16:05.227967  223454 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:16:05.230252  223454 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:16:05.233130  223454 out.go:204]   - Booting up control plane ...
	I0912 22:16:05.233267  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:16:05.233372  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:16:05.233450  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:16:05.241218  223454 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:16:05.242019  223454 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:16:05.242089  223454 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 22:16:05.322855  223454 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
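	The bootstrap above boils down to a single `kubeadm init` run with a pinned PATH and a list of preflight checks to ignore (ssh_runner.go:286 earlier in the log). A hedged, hypothetical re-creation with os/exec, run locally rather than over SSH as minikube does, and with a shortened ignore list:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Hypothetical local re-creation of the logged invocation; minikube runs
	// this inside the node container, not on the host.
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
	}
	cmd := exec.Command("sudo", "env",
		"PATH=/var/lib/minikube/binaries/v1.28.1:/usr/sbin:/usr/bin",
		"kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","),
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}
```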
	I0912 22:16:02.732822  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:02.799271  211844 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 22:16:02.799264  211844 node_ready.go:35] waiting up to 6m0s for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891550  211844 node_ready.go:49] node "pause-959901" has status "Ready":"True"
	I0912 22:16:02.891581  211844 node_ready.go:38] duration metric: took 92.274025ms waiting for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891594  211844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:03.094209  211844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.491973  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.491996  211844 pod_ready.go:81] duration metric: took 397.761263ms waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.492009  211844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891703  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.891726  211844 pod_ready.go:81] duration metric: took 399.709656ms waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891739  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291560  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.291593  211844 pod_ready.go:81] duration metric: took 399.843007ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291607  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691770  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.691795  211844 pod_ready.go:81] duration metric: took 400.178718ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691809  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091401  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.091421  211844 pod_ready.go:81] duration metric: took 399.605265ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091435  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492133  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.492156  211844 pod_ready.go:81] duration metric: took 400.714089ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492172  211844 pod_ready.go:38] duration metric: took 2.600567658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:05.492191  211844 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:16:05.492239  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:16:05.502268  211844 api_server.go:72] duration metric: took 2.772365249s to wait for apiserver process to appear ...
	I0912 22:16:05.502290  211844 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:16:05.502312  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:16:05.506460  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0912 22:16:05.507747  211844 api_server.go:141] control plane version: v1.28.1
	I0912 22:16:05.507769  211844 api_server.go:131] duration metric: took 5.470962ms to wait for apiserver health ...
	I0912 22:16:05.508067  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:16:05.694102  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:16:05.694139  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:05.694147  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:05.694154  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:05.694160  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:05.694168  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:05.694175  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:05.694179  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:05.694186  211844 system_pods.go:74] duration metric: took 186.08578ms to wait for pod list to return data ...
	I0912 22:16:05.694197  211844 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:16:05.891256  211844 default_sa.go:45] found service account: "default"
	I0912 22:16:05.891286  211844 default_sa.go:55] duration metric: took 197.076725ms for default service account to be created ...
	I0912 22:16:05.891298  211844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:16:06.093850  211844 system_pods.go:86] 7 kube-system pods found
	I0912 22:16:06.093878  211844 system_pods.go:89] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:06.093883  211844 system_pods.go:89] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:06.093888  211844 system_pods.go:89] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:06.093892  211844 system_pods.go:89] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:06.093896  211844 system_pods.go:89] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:06.093901  211844 system_pods.go:89] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:06.093905  211844 system_pods.go:89] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:06.093912  211844 system_pods.go:126] duration metric: took 202.60896ms to wait for k8s-apps to be running ...
	I0912 22:16:06.093921  211844 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:16:06.093960  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:06.105434  211844 system_svc.go:56] duration metric: took 11.502861ms WaitForService to wait for kubelet.
	I0912 22:16:06.105462  211844 kubeadm.go:581] duration metric: took 3.375565082s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:16:06.105484  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:16:06.291989  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:16:06.292012  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:16:06.292022  211844 node_conditions.go:105] duration metric: took 186.533943ms to run NodePressure ...
	I0912 22:16:06.292033  211844 start.go:228] waiting for startup goroutines ...
	I0912 22:16:06.292039  211844 start.go:233] waiting for cluster config update ...
	I0912 22:16:06.292047  211844 start.go:242] writing updated cluster config ...
	I0912 22:16:06.292303  211844 ssh_runner.go:195] Run: rm -f paused
	I0912 22:16:06.354421  211844 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:16:06.357076  211844 out.go:177] * Done! kubectl is now configured to use "pause-959901" cluster and "default" namespace by default
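	Before declaring the pause-959901 cluster ready, the run above records the apiserver healthz probe returning 200 (api_server.go lines). A minimal, hypothetical sketch of that polling loop; the URL and timeout are taken from the log, and TLS verification is skipped only to keep the sketch self-contained:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200 "ok",
// mirroring the check the log above reports against https://192.168.94.2:8443.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The serving cert is signed by minikube's private CA; verification is
		// skipped here purely for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```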
	
	* 
	* ==> CRI-O <==
	* Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.234824416Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-mtzsr/coredns" id=c5e4a440-8fe8-496c-b1df-d94a175dda90 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.234911964Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.265057339Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f705fed230a5d630545a209b5fed4d1a74d4b653e65ab038984b57ff8314c92/merged/etc/passwd: no such file or directory"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.265100816Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f705fed230a5d630545a209b5fed4d1a74d4b653e65ab038984b57ff8314c92/merged/etc/group: no such file or directory"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.378279992Z" level=info msg="Created container 1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66: kube-system/kindnet-km9nv/kindnet-cni" id=83ebfabf-2960-46bd-8ce0-f45e7563ca99 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.421314532Z" level=info msg="Starting container: 1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66" id=c223ef7e-aa98-4729-bf8d-bf69bf96eea7 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.434514914Z" level=info msg="Started container" PID=4427 containerID=1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66 description=kube-system/kindnet-km9nv/kindnet-cni id=c223ef7e-aa98-4729-bf8d-bf69bf96eea7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8cefea9c539b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6bd
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.440021625Z" level=info msg="Created container b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa: kube-system/coredns-5dd5756b68-mtzsr/coredns" id=c5e4a440-8fe8-496c-b1df-d94a175dda90 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.440769635Z" level=info msg="Starting container: b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa" id=2e9e47a1-4e65-4076-8f35-ca70b1ccda09 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.451146412Z" level=info msg="Started container" PID=4435 containerID=b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa description=kube-system/coredns-5dd5756b68-mtzsr/coredns id=2e9e47a1-4e65-4076-8f35-ca70b1ccda09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.456799817Z" level=info msg="Created container 897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca: kube-system/kube-proxy-z2hh7/kube-proxy" id=128fbe78-2696-46f3-aeaf-51265830b05a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.457567587Z" level=info msg="Starting container: 897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca" id=fb0b3553-8051-4a4b-8d52-74b1dd345233 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.527340188Z" level=info msg="Started container" PID=4451 containerID=897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca description=kube-system/kube-proxy-z2hh7/kube-proxy id=fb0b3553-8051-4a4b-8d52-74b1dd345233 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.020931661Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025726800Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025761986Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025780048Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029395747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029419462Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029431346Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032467811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032490070Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032503438Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.035766900Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.035786011Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b7b3942c5e983       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   19 seconds ago      Running             coredns                   2                   41ecaa7d266ff       coredns-5dd5756b68-mtzsr
	897b8d78ec723       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   19 seconds ago      Running             kube-proxy                3                   c3c32f7ab3305       kube-proxy-z2hh7
	1603f26d864b3       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   19 seconds ago      Running             kindnet-cni               3                   3e8cefea9c539       kindnet-km9nv
	4c5ca908c2b82       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   24 seconds ago      Running             kube-controller-manager   3                   3f64d56d033b3       kube-controller-manager-pause-959901
	21c414acf582f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago      Running             etcd                      3                   21f89d0b5ddd2       etcd-pause-959901
	2c734cf8137d9       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   24 seconds ago      Running             kube-scheduler            3                   e95e4f41c7e26       kube-scheduler-pause-959901
	d3642e75c030e       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   24 seconds ago      Running             kube-apiserver            2                   1dced900bc3c4       kube-apiserver-pause-959901
	c48215f38677a       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   29 seconds ago      Exited              kube-scheduler            2                   e95e4f41c7e26       kube-scheduler-pause-959901
	547dc8b525719       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   35 seconds ago      Exited              kube-proxy                2                   c3c32f7ab3305       kube-proxy-z2hh7
	bb8b7ab1358b0       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   35 seconds ago      Exited              kindnet-cni               2                   3e8cefea9c539       kindnet-km9nv
	8caf71e9a8547       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   35 seconds ago      Exited              kube-controller-manager   2                   3f64d56d033b3       kube-controller-manager-pause-959901
	3b24877fff317       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   35 seconds ago      Exited              etcd                      2                   21f89d0b5ddd2       etcd-pause-959901
	dda5a9b46878c       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   48 seconds ago      Exited              kube-apiserver            1                   1dced900bc3c4       kube-apiserver-pause-959901
	35a9cfcc69267       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   48 seconds ago      Exited              coredns                   1                   41ecaa7d266ff       coredns-5dd5756b68-mtzsr
	
	* 
	* ==> coredns [35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51796 - 16835 "HINFO IN 609490615299251194.5076587521202352489. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012172396s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34959 - 20434 "HINFO IN 4640168976983627359.2460408328691745340. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.086687202s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-959901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-959901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=pause-959901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T22_14_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 22:14:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-959901
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 22:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:15:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-959901
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c4baa6b94c94648824c3c90ad6b4915
	  System UUID:                34eec731-f746-4460-992e-1e0db2bf2d99
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mtzsr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     65s
	  kube-system                 etcd-pause-959901                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         79s
	  kube-system                 kindnet-km9nv                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-pause-959901             250m (3%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-pause-959901    200m (2%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-proxy-z2hh7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-pause-959901             100m (1%)     0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 64s                kube-proxy       
	  Normal  Starting                 19s                kube-proxy       
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x8 over 84s)  kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     78s                kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  78s                kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    78s                kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 78s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           65s                node-controller  Node pause-959901 event: Registered Node pause-959901 in Controller
	  Normal  NodeReady                62s                kubelet          Node pause-959901 status is now: NodeReady
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x8 over 25s)  kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9s                 node-controller  Node pause-959901 event: Registered Node pause-959901 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep12 21:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[ +32.764792] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[Sep12 22:03] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000008] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +1.027394] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +2.011799] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000007] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +4.095589] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000008] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +8.191199] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000005] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[Sep12 22:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +1.031483] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000007] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +2.019755] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +4.255579] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +8.187238] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000009] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[Sep12 22:12] process 'docker/tmp/qemu-check536253658/check' started with executable stack
	
	* 
	* ==> etcd [21c414acf582f220c38426233c74918a1def7720a72165202d1cc5a3b6931590] <==
	* {"level":"info","ts":"2023-09-12T22:15:43.430983Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-12T22:15:43.431249Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T22:15:43.431324Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T22:15:43.431342Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:43.431408Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:45.066306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.06782Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:pause-959901 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T22:15:45.067858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:45.067835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:45.068014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:45.068062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:45.069327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2023-09-12T22:15:45.069435Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T22:15:56.674838Z","caller":"traceutil/trace.go:171","msg":"trace[1788256736] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"105.50975ms","start":"2023-09-12T22:15:56.569303Z","end":"2023-09-12T22:15:56.674813Z","steps":["trace[1788256736] 'process raft request'  (duration: 94.027077ms)","trace[1788256736] 'compare'  (duration: 11.299261ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.109917Z","caller":"traceutil/trace.go:171","msg":"trace[241787556] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"177.357169ms","start":"2023-09-12T22:15:56.932542Z","end":"2023-09-12T22:15:57.109899Z","steps":["trace[241787556] 'process raft request'  (duration: 116.672895ms)","trace[241787556] 'compare'  (duration: 60.592835ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.298223Z","caller":"traceutil/trace.go:171","msg":"trace[786601345] linearizableReadLoop","detail":"{readStateIndex:537; appliedIndex:536; }","duration":"108.445578ms","start":"2023-09-12T22:15:57.189761Z","end":"2023-09-12T22:15:57.298207Z","steps":["trace[786601345] 'read index received'  (duration: 49.354996ms)","trace[786601345] 'applied index is now lower than readState.Index'  (duration: 59.089877ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.298337Z","caller":"traceutil/trace.go:171","msg":"trace[1661722603] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"179.471387ms","start":"2023-09-12T22:15:57.118839Z","end":"2023-09-12T22:15:57.29831Z","steps":["trace[1661722603] 'process raft request'  (duration: 120.328252ms)","trace[1661722603] 'compare'  (duration: 58.928244ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T22:15:57.29842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.664012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-959901\" ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2023-09-12T22:15:57.298478Z","caller":"traceutil/trace.go:171","msg":"trace[1626776588] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-959901; range_end:; response_count:1; response_revision:506; }","duration":"108.745835ms","start":"2023-09-12T22:15:57.189721Z","end":"2023-09-12T22:15:57.298467Z","steps":["trace[1626776588] 'agreement among raft nodes before linearized reading'  (duration: 108.57011ms)"],"step_count":1}
	
	* 
	* ==> etcd [3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3] <==
	* {"level":"info","ts":"2023-09-12T22:15:32.156993Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T22:15:33.943128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.943235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.943243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.94325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.94401Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:pause-959901 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T22:15:33.94404Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:33.944042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:33.944238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:33.944282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:33.945252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2023-09-12T22:15:33.945417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T22:15:40.847166Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-12T22:15:40.847249Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-959901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"warn","ts":"2023-09-12T22:15:40.847336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.84737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.849057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.849105Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-12T22:15:40.849164Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"info","ts":"2023-09-12T22:15:40.851153Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:40.851247Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:40.851269Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-959901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:16:07 up  1:58,  0 users,  load average: 3.72, 3.59, 2.16
	Linux pause-959901 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66] <==
	* I0912 22:15:48.527807       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0912 22:15:48.527886       1 main.go:107] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0912 22:15:48.528066       1 main.go:116] setting mtu 1500 for CNI 
	I0912 22:15:48.528094       1 main.go:146] kindnetd IP family: "ipv4"
	I0912 22:15:48.528117       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0912 22:15:48.928564       1 main.go:223] Handling node with IPs: map[192.168.94.2:{}]
	I0912 22:15:49.020671       1 main.go:227] handling current node
	I0912 22:15:59.035040       1 main.go:223] Handling node with IPs: map[192.168.94.2:{}]
	I0912 22:15:59.035107       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6] <==
	* I0912 22:15:32.029514       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0912 22:15:32.029582       1 main.go:107] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0912 22:15:32.029763       1 main.go:116] setting mtu 1500 for CNI 
	I0912 22:15:32.029790       1 main.go:146] kindnetd IP family: "ipv4"
	I0912 22:15:32.029815       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0912 22:15:32.347719       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:32.421470       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:33.422725       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:35.424316       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:38.425031       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [d3642e75c030ea3fe88aa1063dbe616912d613b99421d602b2ccccc303608f9b] <==
	* I0912 22:15:46.241609       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0912 22:15:46.242177       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0912 22:15:46.242192       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0912 22:15:46.242945       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:15:46.243032       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:15:46.342753       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 22:15:46.343334       1 aggregator.go:166] initial CRD sync complete...
	I0912 22:15:46.343405       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 22:15:46.343435       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:15:46.438149       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0912 22:15:46.438190       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 22:15:46.438214       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:15:46.439331       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 22:15:46.440320       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 22:15:46.441468       1 shared_informer.go:318] Caches are synced for configmaps
	I0912 22:15:46.441700       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 22:15:46.443700       1 cache.go:39] Caches are synced for autoregister controller
	E0912 22:15:46.443803       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0912 22:15:46.524703       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:15:47.244395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:15:48.157972       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 22:15:48.263495       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 22:15:48.278945       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 22:15:48.522139       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:15:48.532513       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:22.563845       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:23.246145       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:23.858198       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4c5ca908c2b82e4e21818aaff746f200623a8890e73cfe552129eff8ac2c746c] <==
	* I0912 22:15:58.889395       1 shared_informer.go:318] Caches are synced for taint
	I0912 22:15:58.889578       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0912 22:15:58.889560       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0912 22:15:58.889652       1 taint_manager.go:211] "Sending events to api server"
	I0912 22:15:58.889709       1 event.go:307] "Event occurred" object="pause-959901" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-959901 event: Registered Node pause-959901 in Controller"
	I0912 22:15:58.889789       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-959901"
	I0912 22:15:58.889861       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0912 22:15:58.916778       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 22:15:58.921910       1 shared_informer.go:318] Caches are synced for daemon sets
	I0912 22:15:58.926572       1 shared_informer.go:318] Caches are synced for GC
	I0912 22:15:58.928727       1 shared_informer.go:318] Caches are synced for stateful set
	I0912 22:15:58.933030       1 shared_informer.go:318] Caches are synced for PVC protection
	I0912 22:15:58.935289       1 shared_informer.go:318] Caches are synced for persistent volume
	I0912 22:15:58.940704       1 shared_informer.go:318] Caches are synced for HPA
	I0912 22:15:58.940738       1 shared_informer.go:318] Caches are synced for attach detach
	I0912 22:15:58.947976       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0912 22:15:58.950344       1 shared_informer.go:318] Caches are synced for job
	I0912 22:15:58.961700       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0912 22:15:58.961818       1 shared_informer.go:318] Caches are synced for endpoint
	I0912 22:15:58.962480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.659µs"
	I0912 22:15:58.963678       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 22:15:59.003624       1 shared_informer.go:318] Caches are synced for disruption
	I0912 22:15:59.338686       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 22:15:59.395777       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 22:15:59.395813       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e] <==
	* I0912 22:15:32.842460       1 serving.go:348] Generated self-signed cert in-memory
	I0912 22:15:33.518684       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0912 22:15:33.518711       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:15:33.519892       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:15:33.519943       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:15:33.520708       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0912 22:15:33.520811       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83] <==
	* I0912 22:15:32.190149       1 server_others.go:69] "Using iptables proxy"
	E0912 22:15:32.221576       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:33.410754       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:35.565333       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.337154       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca] <==
	* I0912 22:15:48.631303       1 server_others.go:69] "Using iptables proxy"
	I0912 22:15:48.646830       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0912 22:15:48.674740       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 22:15:48.676941       1 server_others.go:152] "Using iptables Proxier"
	I0912 22:15:48.676976       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0912 22:15:48.676986       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0912 22:15:48.677024       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 22:15:48.677224       1 server.go:846] "Version info" version="v1.28.1"
	I0912 22:15:48.677243       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:15:48.678043       1 config.go:188] "Starting service config controller"
	I0912 22:15:48.678070       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 22:15:48.678093       1 config.go:97] "Starting endpoint slice config controller"
	I0912 22:15:48.678098       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 22:15:48.678586       1 config.go:315] "Starting node config controller"
	I0912 22:15:48.678599       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 22:15:48.778227       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 22:15:48.778233       1 shared_informer.go:318] Caches are synced for service config
	I0912 22:15:48.778665       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2c734cf8137d954a6bc98afaf7aa836fa45edde7af6f3364cd7ef7b889371894] <==
	* I0912 22:15:46.348359       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 22:15:46.423857       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 22:15:46.423898       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 22:15:46.423922       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0912 22:15:46.433577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.433630       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0912 22:15:46.434048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0912 22:15:46.434184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0912 22:15:46.434444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.435200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.435228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.435241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.435250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I0912 22:15:47.624948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e] <==
	* W0912 22:15:39.137663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.137767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.137808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.954991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.955039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.995528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.995562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.161164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.161206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.223768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.223807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.286791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.286833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.286850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.287028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.294369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.294433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.322643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.322705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.330387       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0912 22:15:40.330650       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0912 22:15:40.330725       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Sep 12 22:15:46 pause-959901 kubelet[4082]: E0912 22:15:46.431959    4082 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-959901" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-959901' and this object
	Sep 12 22:15:46 pause-959901 kubelet[4082]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.432247    4082 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446031    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-lib-modules\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446085    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a0e46a6-3795-4959-8b48-576a02252969-xtables-lock\") pod \"kube-proxy-z2hh7\" (UID: \"9a0e46a6-3795-4959-8b48-576a02252969\") " pod="kube-system/kube-proxy-z2hh7"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446115    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a0e46a6-3795-4959-8b48-576a02252969-lib-modules\") pod \"kube-proxy-z2hh7\" (UID: \"9a0e46a6-3795-4959-8b48-576a02252969\") " pod="kube-system/kube-proxy-z2hh7"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.521197    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-cni-cfg\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.521263    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-xtables-lock\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.533287    4082 kubelet_node_status.go:108] "Node was previously registered" node="pause-959901"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.533401    4082 kubelet_node_status.go:73] "Successfully registered node" node="pause-959901"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.534753    4082 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.535737    4082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538082    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538137    4082 projected.go:198] Error preparing data for projected volume kube-api-access-t48bd for pod kube-system/kindnet-km9nv: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538084    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538220    4082 projected.go:198] Error preparing data for projected volume kube-api-access-dvggc for pod kube-system/coredns-5dd5756b68-mtzsr: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538229    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d59bdd92-bd6e-408a-a28a-dbd1255077a8-kube-api-access-t48bd podName:d59bdd92-bd6e-408a-a28a-dbd1255077a8 nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038204 +0000 UTC m=+5.715042289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t48bd" (UniqueName: "kubernetes.io/projected/d59bdd92-bd6e-408a-a28a-dbd1255077a8-kube-api-access-t48bd") pod "kindnet-km9nv" (UID: "d59bdd92-bd6e-408a-a28a-dbd1255077a8") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538082    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538290    4082 projected.go:198] Error preparing data for projected volume kube-api-access-mtz4x for pod kube-system/kube-proxy-z2hh7: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538296    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebce215d-39b5-449a-9c8f-67054a18fabf-kube-api-access-dvggc podName:ebce215d-39b5-449a-9c8f-67054a18fabf nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038272189 +0000 UTC m=+5.715110507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dvggc" (UniqueName: "kubernetes.io/projected/ebce215d-39b5-449a-9c8f-67054a18fabf-kube-api-access-dvggc") pod "coredns-5dd5756b68-mtzsr" (UID: "ebce215d-39b5-449a-9c8f-67054a18fabf") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538340    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a0e46a6-3795-4959-8b48-576a02252969-kube-api-access-mtz4x podName:9a0e46a6-3795-4959-8b48-576a02252969 nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038327455 +0000 UTC m=+5.715165757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mtz4x" (UniqueName: "kubernetes.io/projected/9a0e46a6-3795-4959-8b48-576a02252969-kube-api-access-mtz4x") pod "kube-proxy-z2hh7" (UID: "9a0e46a6-3795-4959-8b48-576a02252969") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.230958    4082 scope.go:117] "RemoveContainer" containerID="bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6"
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.231581    4082 scope.go:117] "RemoveContainer" containerID="547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83"
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.231699    4082 scope.go:117] "RemoveContainer" containerID="35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7"
	Sep 12 22:15:56 pause-959901 kubelet[4082]: I0912 22:15:56.477389    4082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-959901 -n pause-959901
helpers_test.go:261: (dbg) Run:  kubectl --context pause-959901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
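The etcd section of the dump above emits its traces as one JSON object per line (for example the "apply request took too long" warning with a "took" field). A minimal sketch, assuming only the field names visible in those lines ("level", "msg", "took") and not part of the test harness, for filtering slow applies out of a dump like this one:

// slowetcd.go: scan an etcd log dump (one JSON object per line) and print
// entries where the apply exceeded the 100ms expected duration.
// Field names ("level", "msg", "took") are taken from the log lines above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

type etcdEntry struct {
	Level string `json:"level"`
	Msg   string `json:"msg"`
	Took  string `json:"took"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some log lines are long
	for sc.Scan() {
		var e etcdEntry
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines (section headers, klog output, etc.)
		}
		if e.Msg != "apply request took too long" {
			continue
		}
		d, err := time.ParseDuration(e.Took)
		if err != nil {
			continue
		}
		fmt.Printf("%s slow apply: %s\n", e.Level, d)
	}
}

Fed the etcd portion of the dump on stdin, a sketch like this would surface the ~108ms read-only range request logged at 22:15:57 above.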
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-959901
helpers_test.go:235: (dbg) docker inspect pause-959901:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0",
	        "Created": "2023-09-12T22:14:34.765978332Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 203933,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-09-12T22:14:35.144690774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0508862d812894c98deaaf3533e6d3386b479df1d249d4410a6247f1f44ad45d",
	        "ResolvConfPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/hosts",
	        "LogPath": "/var/lib/docker/containers/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0/4411e8f8fd2db4fb77ae83eea5022b4758e0804fc47d7c636b15363366d270e0-json.log",
	        "Name": "/pause-959901",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-959901:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-959901",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2147483648,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b-init/diff:/var/lib/docker/overlay2/27d59bddd44498ba277aabbca5bbef44e363739d94cbe3a544670a142640c048/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a3ab3e9fa699299081fad357f70a8a3aef7943a290da4250dc77d335655b3e7b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-959901",
	                "Source": "/var/lib/docker/volumes/pause-959901/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-959901",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-959901",
	                "name.minikube.sigs.k8s.io": "pause-959901",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f6ab422847be213febd454a05d70f76eeb28d6da2296817e28f819199921667c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32984"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32983"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/f6ab422847be",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-959901": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4411e8f8fd2d",
	                        "pause-959901"
	                    ],
	                    "NetworkID": "ebe095e8c57d41a952ae6f61cf6e3d174e928370aa959f0a328c56aba3e0c643",
	                    "EndpointID": "a50fc63649a796516db7b995cdade53c8aa5a3d5d83d12e972dd9fdac7e29220",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
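The docker inspect output above records the published host ports under NetworkSettings.Ports (here, the container's 8443/tcp apiserver port is bound to 127.0.0.1:32981). A minimal Go sketch, assuming only the fields shown in that JSON and written as a hypothetical helper rather than anything in the minikube test code, for extracting that single mapping:

// inspectport.go: read `docker inspect <container>` JSON from stdin and print
// the host address bound to the container's 8443/tcp port.
// Field names ("NetworkSettings", "Ports", "HostIp", "HostPort") match the
// inspect output shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type inspect struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	var containers []inspect // docker inspect emits a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range containers {
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("%s apiserver -> %s:%s\n", c.Name, b.HostIP, b.HostPort)
		}
	}
}

Piping `docker inspect pause-959901` into this would print "/pause-959901 apiserver -> 127.0.0.1:32981" for the inspect output shown above.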
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-959901 -n pause-959901
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-959901 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-959901 logs -n 25: (1.549650776s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat kubelet --no-pager                               |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo journalctl                       | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | -xeu kubelet --all --full                            |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo docker                           | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo                                  | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo cat                              | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo containerd                       | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo systemctl                        | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo find                             | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-511142 sudo crio                             | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| delete  | -p auto-511142                                       | auto-511142    | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC | 12 Sep 23 22:15 UTC |
	| start   | -p calico-511142 --memory=3072                       | calico-511142  | jenkins | v1.31.2 | 12 Sep 23 22:15 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                |         |         |                     |                     |
	|         | --cni=calico --driver=docker                         |                |         |         |                     |                     |
	|         | --container-runtime=crio                             |                |         |         |                     |                     |
	| ssh     | -p kindnet-511142 pgrep -a                           | kindnet-511142 | jenkins | v1.31.2 | 12 Sep 23 22:16 UTC | 12 Sep 23 22:16 UTC |
	|         | kubelet                                              |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 22:15:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 22:15:51.288023  223454 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:15:51.288330  223454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:51.288340  223454 out.go:309] Setting ErrFile to fd 2...
	I0912 22:15:51.288348  223454 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:15:51.288542  223454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:15:51.289142  223454 out.go:303] Setting JSON to false
	I0912 22:15:51.290479  223454 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7099,"bootTime":1694549852,"procs":489,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:15:51.290549  223454 start.go:138] virtualization: kvm guest
	I0912 22:15:51.293259  223454 out.go:177] * [calico-511142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:15:51.294995  223454 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:15:51.296518  223454 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:15:51.295113  223454 notify.go:220] Checking for updates...
	I0912 22:15:51.299256  223454 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:15:51.300702  223454 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:15:51.302271  223454 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:15:51.303900  223454 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:15:51.305737  223454 config.go:182] Loaded profile config "kindnet-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.305868  223454 config.go:182] Loaded profile config "kubernetes-upgrade-533888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.306024  223454 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:51.306128  223454 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:15:51.330976  223454 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:15:51.331066  223454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:51.392574  223454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-09-12 22:15:51.383972676 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:51.392721  223454 docker.go:294] overlay module found
	I0912 22:15:51.394578  223454 out.go:177] * Using the docker driver based on user configuration
	I0912 22:15:47.446089  211844 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0912 22:15:47.450909  211844 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0912 22:15:47.450927  211844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0912 22:15:47.470449  211844 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0912 22:15:48.164407  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:15:48.174574  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:15:48.174604  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0912 22:15:48.174612  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0912 22:15:48.174621  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0912 22:15:48.174629  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0912 22:15:48.174636  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0912 22:15:48.174643  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0912 22:15:48.174651  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0912 22:15:48.174659  211844 system_pods.go:74] duration metric: took 10.226101ms to wait for pod list to return data ...
	I0912 22:15:48.174668  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:15:48.177742  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:15:48.177765  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:15:48.177774  211844 node_conditions.go:105] duration metric: took 3.098914ms to run NodePressure ...
	I0912 22:15:48.177794  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0912 22:15:48.544645  211844 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549667  211844 kubeadm.go:787] kubelet initialised
	I0912 22:15:48.549695  211844 kubeadm.go:788] duration metric: took 5.019167ms waiting for restarted kubelet to initialise ...
	I0912 22:15:48.549705  211844 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:15:48.555832  211844 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:50.631682  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:51.396065  223454 start.go:298] selected driver: docker
	I0912 22:15:51.396084  223454 start.go:902] validating driver "docker" against <nil>
	I0912 22:15:51.396098  223454 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:15:51.396970  223454 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:15:51.447595  223454 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:66 SystemTime:2023-09-12 22:15:51.438663805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:15:51.447752  223454 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 22:15:51.447957  223454 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0912 22:15:51.449415  223454 out.go:177] * Using Docker driver with root privileges
	I0912 22:15:51.450698  223454 cni.go:84] Creating CNI manager for "calico"
	I0912 22:15:51.450715  223454 start_flags.go:316] Found "Calico" CNI - setting NetworkPlugin=cni
	I0912 22:15:51.450727  223454 start_flags.go:321] config:
	{Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRI
Socket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:15:51.452103  223454 out.go:177] * Starting control plane node calico-511142 in cluster calico-511142
	I0912 22:15:51.453308  223454 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 22:15:51.454634  223454 out.go:177] * Pulling base image ...
	I0912 22:15:51.455952  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:51.455981  223454 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 22:15:51.456027  223454 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0912 22:15:51.456043  223454 cache.go:57] Caching tarball of preloaded images
	I0912 22:15:51.456132  223454 preload.go:174] Found /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0912 22:15:51.456148  223454 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0912 22:15:51.456266  223454 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json ...
	I0912 22:15:51.456291  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json: {Name:mkb69e099ad8791de986653559089df7dc54b7f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:51.472079  223454 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon, skipping pull
	I0912 22:15:51.472104  223454 cache.go:145] gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 exists in daemon, skipping load
	I0912 22:15:51.472156  223454 cache.go:195] Successfully downloaded all kic artifacts
	I0912 22:15:51.472207  223454 start.go:365] acquiring machines lock for calico-511142: {Name:mk6e488ee73b47a40a81d830ddbf2a15f85393b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0912 22:15:51.472309  223454 start.go:369] acquired machines lock for "calico-511142" in 77.105µs
	I0912 22:15:51.472333  223454 start.go:93] Provisioning new machine with config: &{Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:15:51.472451  223454 start.go:125] createHost starting for "" (driver="docker")
	I0912 22:15:47.786503  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:48.285667  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:48.786043  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:49.286584  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:49.785973  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:50.286117  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:50.785881  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:51.285990  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:51.785886  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:52.286597  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:47.508241  187890 cri.go:89] found id: ""
	I0912 22:15:47.508268  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.508277  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:47.508284  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:47.508351  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:47.550252  187890 cri.go:89] found id: ""
	I0912 22:15:47.550280  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.550290  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:47.550298  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:47.550353  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:47.590302  187890 cri.go:89] found id: ""
	I0912 22:15:47.590330  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.590340  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:47.590348  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:47.590401  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:47.633411  187890 cri.go:89] found id: ""
	I0912 22:15:47.633438  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.633448  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:47.633457  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:47.633507  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:47.672381  187890 cri.go:89] found id: ""
	I0912 22:15:47.672403  187890 logs.go:284] 0 containers: []
	W0912 22:15:47.672410  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:47.672419  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:47.672433  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:47.716906  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:47.716936  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:47.747554  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:47.747597  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:47.797518  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:47.797546  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:47.901931  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:47.901963  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:47.919706  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:47.919736  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:47.993295  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:50.494155  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:50.494596  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:50.494647  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:50.494707  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:50.591532  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:50.591556  187890 cri.go:89] found id: ""
	I0912 22:15:50.591562  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:50.591603  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:50.595906  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:50.595968  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:50.631504  187890 cri.go:89] found id: ""
	I0912 22:15:50.631534  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.631541  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:50.631547  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:50.631602  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:50.666238  187890 cri.go:89] found id: ""
	I0912 22:15:50.666263  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.666270  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:50.666276  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:50.666318  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:50.703457  187890 cri.go:89] found id: ""
	I0912 22:15:50.703488  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.703497  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:50.703506  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:50.703564  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:50.738123  187890 cri.go:89] found id: ""
	I0912 22:15:50.738152  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.738162  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:50.738170  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:50.738217  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:50.771989  187890 cri.go:89] found id: ""
	I0912 22:15:50.772017  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.772029  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:50.772037  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:50.772091  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:50.807556  187890 cri.go:89] found id: ""
	I0912 22:15:50.807590  187890 logs.go:284] 0 containers: []
	W0912 22:15:50.807601  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:50.807611  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:50.807676  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:51.001275  187890 cri.go:89] found id: ""
	I0912 22:15:51.001298  187890 logs.go:284] 0 containers: []
	W0912 22:15:51.001305  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:51.001313  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:51.001327  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:51.093714  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:51.093747  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:51.154354  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:51.154382  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:51.219306  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:51.219430  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:51.219450  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:51.266577  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:51.266611  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:51.294163  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:51.294193  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:52.785986  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.285896  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.785810  213173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0912 22:15:53.864227  213173 kubeadm.go:1081] duration metric: took 11.913689267s to wait for elevateKubeSystemPrivileges.
	I0912 22:15:53.864263  213173 kubeadm.go:406] StartCluster complete in 22.407904287s
	I0912 22:15:53.864284  213173 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:53.864358  213173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:15:53.865749  213173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:15:53.865989  213173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:15:53.866250  213173 config.go:182] Loaded profile config "kindnet-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:53.866430  213173 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:15:53.866496  213173 addons.go:69] Setting storage-provisioner=true in profile "kindnet-511142"
	I0912 22:15:53.866518  213173 addons.go:231] Setting addon storage-provisioner=true in "kindnet-511142"
	I0912 22:15:53.866573  213173 host.go:66] Checking if "kindnet-511142" exists ...
	I0912 22:15:53.866641  213173 addons.go:69] Setting default-storageclass=true in profile "kindnet-511142"
	I0912 22:15:53.866661  213173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-511142"
	I0912 22:15:53.866913  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.867082  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.899997  213173 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-511142" context rescaled to 1 replicas
	I0912 22:15:53.900032  213173 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:15:53.902537  213173 out.go:177] * Verifying Kubernetes components...
	I0912 22:15:53.903900  213173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:15:53.905298  213173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0912 22:15:53.906646  213173 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:15:53.906663  213173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0912 22:15:53.906723  213173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-511142
	I0912 22:15:53.916740  213173 addons.go:231] Setting addon default-storageclass=true in "kindnet-511142"
	I0912 22:15:53.916783  213173 host.go:66] Checking if "kindnet-511142" exists ...
	I0912 22:15:53.917108  213173 cli_runner.go:164] Run: docker container inspect kindnet-511142 --format={{.State.Status}}
	I0912 22:15:53.932173  213173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/kindnet-511142/id_rsa Username:docker}
	I0912 22:15:53.959298  213173 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0912 22:15:53.959324  213173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0912 22:15:53.959377  213173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-511142
	I0912 22:15:53.974952  213173 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0912 22:15:53.980312  213173 node_ready.go:35] waiting up to 15m0s for node "kindnet-511142" to be "Ready" ...
	I0912 22:15:53.984765  213173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32994 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/kindnet-511142/id_rsa Username:docker}
	I0912 22:15:54.143524  213173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0912 22:15:54.157704  213173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0912 22:15:54.549858  213173 start.go:917] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0912 22:15:55.110224  213173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0912 22:15:51.474397  223454 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0912 22:15:51.474698  223454 start.go:159] libmachine.API.Create for "calico-511142" (driver="docker")
	I0912 22:15:51.474737  223454 client.go:168] LocalClient.Create starting
	I0912 22:15:51.474803  223454 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem
	I0912 22:15:51.474850  223454 main.go:141] libmachine: Decoding PEM data...
	I0912 22:15:51.474870  223454 main.go:141] libmachine: Parsing certificate...
	I0912 22:15:51.474919  223454 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem
	I0912 22:15:51.474940  223454 main.go:141] libmachine: Decoding PEM data...
	I0912 22:15:51.474951  223454 main.go:141] libmachine: Parsing certificate...
	I0912 22:15:51.475255  223454 cli_runner.go:164] Run: docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0912 22:15:51.491342  223454 cli_runner.go:211] docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0912 22:15:51.491426  223454 network_create.go:281] running [docker network inspect calico-511142] to gather additional debugging logs...
	I0912 22:15:51.491449  223454 cli_runner.go:164] Run: docker network inspect calico-511142
	W0912 22:15:51.507224  223454 cli_runner.go:211] docker network inspect calico-511142 returned with exit code 1
	I0912 22:15:51.507252  223454 network_create.go:284] error running [docker network inspect calico-511142]: docker network inspect calico-511142: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-511142 not found
	I0912 22:15:51.507264  223454 network_create.go:286] output of [docker network inspect calico-511142]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-511142 not found
	
	** /stderr **
	I0912 22:15:51.507321  223454 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:15:51.524491  223454 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-38edbaf277f1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:05:77:7e:89} reservation:<nil>}
	I0912 22:15:51.525119  223454 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd1ba5635088 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:dc:7c:21:dd} reservation:<nil>}
	I0912 22:15:51.525906  223454 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-fb713f90456f IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:84:8e:96:6c} reservation:<nil>}
	I0912 22:15:51.526461  223454 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ef86beeb6a57 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:df:5c:b7:a4} reservation:<nil>}
	I0912 22:15:51.527144  223454 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010c4a80}
	I0912 22:15:51.527172  223454 network_create.go:123] attempt to create docker network calico-511142 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0912 22:15:51.527215  223454 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-511142 calico-511142
	I0912 22:15:51.579756  223454 network_create.go:107] docker network calico-511142 192.168.85.0/24 created
	I0912 22:15:51.579797  223454 kic.go:117] calculated static IP "192.168.85.2" for the "calico-511142" container
	I0912 22:15:51.579868  223454 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0912 22:15:51.596720  223454 cli_runner.go:164] Run: docker volume create calico-511142 --label name.minikube.sigs.k8s.io=calico-511142 --label created_by.minikube.sigs.k8s.io=true
	I0912 22:15:51.614050  223454 oci.go:103] Successfully created a docker volume calico-511142
	I0912 22:15:51.614145  223454 cli_runner.go:164] Run: docker run --rm --name calico-511142-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-511142 --entrypoint /usr/bin/test -v calico-511142:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -d /var/lib
	I0912 22:15:52.122909  223454 oci.go:107] Successfully prepared a docker volume calico-511142
	I0912 22:15:52.122943  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:15:52.122961  223454 kic.go:190] Starting extracting preloaded images to volume ...
	I0912 22:15:52.123016  223454 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-511142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir
	I0912 22:15:52.632298  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:54.633423  211844 pod_ready.go:102] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:55.152227  213173 addons.go:502] enable addons completed in 1.28578599s: enabled=[default-storageclass storage-provisioner]
	I0912 22:15:56.027752  213173 node_ready.go:58] node "kindnet-511142" has status "Ready":"False"
	I0912 22:15:53.839193  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:53.839594  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:53.839630  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:53.839676  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:53.898981  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:53.899007  187890 cri.go:89] found id: ""
	I0912 22:15:53.899016  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:53.899070  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:53.903821  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:53.903886  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:53.979602  187890 cri.go:89] found id: ""
	I0912 22:15:53.979625  187890 logs.go:284] 0 containers: []
	W0912 22:15:53.979635  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:53.979643  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:53.979690  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:54.018553  187890 cri.go:89] found id: ""
	I0912 22:15:54.018581  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.018588  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:54.018594  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:54.018644  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:54.071283  187890 cri.go:89] found id: ""
	I0912 22:15:54.071310  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.071319  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:54.071326  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:54.071390  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:54.109404  187890 cri.go:89] found id: ""
	I0912 22:15:54.109431  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.109441  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:54.109448  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:54.109495  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:54.165398  187890 cri.go:89] found id: ""
	I0912 22:15:54.165423  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.165432  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:54.165439  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:54.165492  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:54.205991  187890 cri.go:89] found id: ""
	I0912 22:15:54.206016  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.206025  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:54.206032  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:54.206087  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:54.266368  187890 cri.go:89] found id: ""
	I0912 22:15:54.266394  187890 logs.go:284] 0 containers: []
	W0912 22:15:54.266404  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:54.266427  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:54.266446  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:54.283894  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:54.283926  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:15:54.357665  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:54.357746  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:54.357770  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:54.403585  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:54.403612  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:54.434821  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:54.434923  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:15:54.480958  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:54.480994  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:57.092117  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:15:57.092541  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:15:57.092622  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:15:57.092688  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:15:57.125605  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:57.125633  187890 cri.go:89] found id: ""
	I0912 22:15:57.125641  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:15:57.125701  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:15:57.128997  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:15:57.129065  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:15:57.160064  187890 cri.go:89] found id: ""
	I0912 22:15:57.160087  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.160094  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:15:57.160099  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:15:57.160157  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:15:57.192350  187890 cri.go:89] found id: ""
	I0912 22:15:57.192377  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.192387  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:15:57.192394  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:15:57.192437  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:15:57.223488  187890 cri.go:89] found id: ""
	I0912 22:15:57.223513  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.223520  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:15:57.223526  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:15:57.223577  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:15:57.255398  187890 cri.go:89] found id: ""
	I0912 22:15:57.255424  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.255434  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:15:57.255442  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:15:57.255494  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:15:57.288150  187890 cri.go:89] found id: ""
	I0912 22:15:57.288178  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.288185  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:15:57.288190  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:15:57.288232  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:15:57.322252  187890 cri.go:89] found id: ""
	I0912 22:15:57.322275  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.322281  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:15:57.322287  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:15:57.322340  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:15:57.353894  187890 cri.go:89] found id: ""
	I0912 22:15:57.353922  187890 logs.go:284] 0 containers: []
	W0912 22:15:57.353929  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:15:57.353937  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:15:57.353948  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:15:57.435748  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:15:57.435787  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:15:57.454319  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:15:57.454348  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0912 22:15:57.690890  223454 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-511142:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 -I lz4 -xf /preloaded.tar -C /extractDir: (5.567809565s)
	I0912 22:15:57.690924  223454 kic.go:199] duration metric: took 5.567959 seconds to extract preloaded images to volume
	W0912 22:15:57.691075  223454 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0912 22:15:57.691198  223454 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0912 22:15:57.747180  223454 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-511142 --name calico-511142 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-511142 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-511142 --network calico-511142 --ip 192.168.85.2 --volume calico-511142:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402
	I0912 22:15:58.103350  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Running}}
	I0912 22:15:58.121597  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.139925  223454 cli_runner.go:164] Run: docker exec calico-511142 stat /var/lib/dpkg/alternatives/iptables
	I0912 22:15:58.185216  223454 oci.go:144] the created container "calico-511142" has a running status.
	I0912 22:15:58.185247  223454 kic.go:221] Creating ssh key for kic: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa...
	I0912 22:15:58.587587  223454 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0912 22:15:58.607883  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.626133  223454 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0912 22:15:58.626165  223454 kic_runner.go:114] Args: [docker exec --privileged calico-511142 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0912 22:15:58.729208  223454 cli_runner.go:164] Run: docker container inspect calico-511142 --format={{.State.Status}}
	I0912 22:15:58.751850  223454 machine.go:88] provisioning docker machine ...
	I0912 22:15:58.751891  223454 ubuntu.go:169] provisioning hostname "calico-511142"
	I0912 22:15:58.751944  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:58.779223  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:58.779752  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:58.779780  223454 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-511142 && echo "calico-511142" | sudo tee /etc/hostname
	I0912 22:15:58.931626  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-511142
	
	I0912 22:15:58.931728  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:58.953164  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:58.953603  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:58.953638  223454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-511142' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-511142/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-511142' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0912 22:15:59.092468  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0912 22:15:59.092495  223454 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/17194-15878/.minikube CaCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17194-15878/.minikube}
	I0912 22:15:59.092525  223454 ubuntu.go:177] setting up certificates
	I0912 22:15:59.092535  223454 provision.go:83] configureAuth start
	I0912 22:15:59.092588  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:15:59.108374  223454 provision.go:138] copyHostCerts
	I0912 22:15:59.108448  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem, removing ...
	I0912 22:15:59.108460  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem
	I0912 22:15:59.108536  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/ca.pem (1082 bytes)
	I0912 22:15:59.108661  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem, removing ...
	I0912 22:15:59.108673  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem
	I0912 22:15:59.108704  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/cert.pem (1123 bytes)
	I0912 22:15:59.108770  223454 exec_runner.go:144] found /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem, removing ...
	I0912 22:15:59.108780  223454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem
	I0912 22:15:59.108803  223454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17194-15878/.minikube/key.pem (1679 bytes)
	I0912 22:15:59.108860  223454 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem org=jenkins.calico-511142 san=[192.168.85.2 127.0.0.1 localhost 127.0.0.1 minikube calico-511142]
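
provision.go is issuing a server certificate whose subject alternative names match the san=[...] list above, signed by the pre-existing minikube CA. A minimal crypto/x509 sketch of that technique; the throwaway CA built in main stands in for .minikube/certs/ca.pem and ca-key.pem, and the organization string is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// issueServerCert signs a server certificate carrying the IP and DNS SANs seen
// in the log, using an existing CA certificate and key as the parent.
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.calico-511142"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.85.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "calico-511142"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
}

func main() {
	// Self-signed throwaway CA standing in for the minikubeCA pair from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	der, err := issueServerCert(caCert, caKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("issued %d-byte DER server certificate\n", len(der))
}
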
	I0912 22:15:59.276858  223454 provision.go:172] copyRemoteCerts
	I0912 22:15:59.276910  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0912 22:15:59.276942  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.293592  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.393073  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0912 22:15:59.415545  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0912 22:15:59.437272  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0912 22:15:59.459930  223454 provision.go:86] duration metric: configureAuth took 367.37612ms
	I0912 22:15:59.459962  223454 ubuntu.go:193] setting minikube options for container-runtime
	I0912 22:15:59.460145  223454 config.go:182] Loaded profile config "calico-511142": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:15:59.460253  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.476646  223454 main.go:141] libmachine: Using SSH client type: native
	I0912 22:15:59.476970  223454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 127.0.0.1 32999 <nil> <nil>}
	I0912 22:15:59.476991  223454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0912 22:15:59.703236  223454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0912 22:15:59.703259  223454 machine.go:91] provisioned docker machine in 951.381638ms
	I0912 22:15:59.703268  223454 client.go:171] LocalClient.Create took 8.228519867s
	I0912 22:15:59.703287  223454 start.go:167] duration metric: libmachine.API.Create for "calico-511142" took 8.228594993s
	I0912 22:15:59.703293  223454 start.go:300] post-start starting for "calico-511142" (driver="docker")
	I0912 22:15:59.703302  223454 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0912 22:15:59.703364  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0912 22:15:59.703399  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.720749  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.817592  223454 ssh_runner.go:195] Run: cat /etc/os-release
	I0912 22:15:59.820643  223454 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0912 22:15:59.820687  223454 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0912 22:15:59.820706  223454 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0912 22:15:59.820718  223454 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I0912 22:15:59.820731  223454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/addons for local assets ...
	I0912 22:15:59.820791  223454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17194-15878/.minikube/files for local assets ...
	I0912 22:15:59.820878  223454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem -> 226982.pem in /etc/ssl/certs
	I0912 22:15:59.820988  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0912 22:15:59.828482  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:15:59.849536  223454 start.go:303] post-start completed in 146.230184ms
	I0912 22:15:59.849898  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:15:59.867131  223454 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/config.json ...
	I0912 22:15:59.867360  223454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:15:59.867403  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:15:59.884236  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:15:59.981061  223454 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0912 22:15:59.984937  223454 start.go:128] duration metric: createHost completed in 8.512472758s
	I0912 22:15:59.984963  223454 start.go:83] releasing machines lock for "calico-511142", held for 8.512642264s
	I0912 22:15:59.985018  223454 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-511142
	I0912 22:16:00.003158  223454 ssh_runner.go:195] Run: cat /version.json
	I0912 22:16:00.003209  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:16:00.003218  223454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0912 22:16:00.003271  223454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-511142
	I0912 22:16:00.023617  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:16:00.024000  223454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32999 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/calico-511142/id_rsa Username:docker}
	I0912 22:16:00.119919  223454 ssh_runner.go:195] Run: systemctl --version
	I0912 22:16:00.215356  223454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0912 22:16:00.356813  223454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0912 22:16:00.361476  223454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:16:00.379092  223454 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0912 22:16:00.379183  223454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0912 22:16:00.408005  223454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0912 22:16:00.408030  223454 start.go:469] detecting cgroup driver to use...
	I0912 22:16:00.408063  223454 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0912 22:16:00.408105  223454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0912 22:16:00.422939  223454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0912 22:16:00.433621  223454 docker.go:196] disabling cri-docker service (if available) ...
	I0912 22:16:00.433683  223454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0912 22:16:00.445886  223454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0912 22:16:00.461741  223454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0912 22:16:00.548867  223454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0912 22:16:00.633463  223454 docker.go:212] disabling docker service ...
	I0912 22:16:00.633509  223454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0912 22:16:00.652794  223454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0912 22:16:00.663363  223454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0912 22:16:00.737058  223454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0912 22:16:00.820777  223454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0912 22:16:00.831652  223454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0912 22:16:00.846472  223454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0912 22:16:00.846522  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.855442  223454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0912 22:16:00.855494  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.864325  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.873760  223454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0912 22:16:00.882467  223454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0912 22:16:00.890971  223454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0912 22:16:00.898251  223454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0912 22:16:00.906302  223454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0912 22:16:00.981328  223454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0912 22:16:01.098268  223454 start.go:516] Will wait 60s for socket path /var/run/crio/crio.sock
	I0912 22:16:01.098336  223454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
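
Here the CRI-O socket already exists, so a single stat suffices; the "Will wait 60s" budget implies a poll until the path appears after the crio restart. A minimal sketch of such a wait, assuming a plain os.Stat poll (illustrative only, not minikube's start.go):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a filesystem path until it exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
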
	I0912 22:16:01.101887  223454 start.go:537] Will wait 60s for crictl version
	I0912 22:16:01.101937  223454 ssh_runner.go:195] Run: which crictl
	I0912 22:16:01.105140  223454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0912 22:16:01.137622  223454 start.go:553] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0912 22:16:01.137696  223454 ssh_runner.go:195] Run: crio --version
	I0912 22:16:01.170217  223454 ssh_runner.go:195] Run: crio --version
	I0912 22:16:01.206488  223454 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.6 ...
	I0912 22:16:01.207796  223454 cli_runner.go:164] Run: docker network inspect calico-511142 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0912 22:16:01.223787  223454 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0912 22:16:01.227375  223454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0912 22:16:01.238220  223454 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0912 22:16:01.238273  223454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:15:56.678849  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:15:56.678872  211844 pod_ready.go:81] duration metric: took 8.123020603s waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:56.678882  211844 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:15:58.694156  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:16:01.194368  211844 pod_ready.go:102] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"False"
	I0912 22:15:58.027941  213173 node_ready.go:58] node "kindnet-511142" has status "Ready":"False"
	I0912 22:15:59.028166  213173 node_ready.go:49] node "kindnet-511142" has status "Ready":"True"
	I0912 22:15:59.028200  213173 node_ready.go:38] duration metric: took 5.047863011s waiting for node "kindnet-511142" to be "Ready" ...
	I0912 22:15:59.028212  213173 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:15:59.038795  213173 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.060396  213173 pod_ready.go:92] pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.060423  213173 pod_ready.go:81] duration metric: took 1.02159687s waiting for pod "coredns-5dd5756b68-k62tg" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.060436  213173 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.065729  213173 pod_ready.go:92] pod "etcd-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.065751  213173 pod_ready.go:81] duration metric: took 5.308736ms waiting for pod "etcd-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.065763  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.125812  213173 pod_ready.go:92] pod "kube-apiserver-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.125850  213173 pod_ready.go:81] duration metric: took 60.081242ms waiting for pod "kube-apiserver-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.125860  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.228806  213173 pod_ready.go:92] pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.228830  213173 pod_ready.go:81] duration metric: took 102.962871ms waiting for pod "kube-controller-manager-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.228843  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-nwvr2" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.628765  213173 pod_ready.go:92] pod "kube-proxy-nwvr2" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:00.628788  213173 pod_ready.go:81] duration metric: took 399.937625ms waiting for pod "kube-proxy-nwvr2" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:00.628797  213173 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:01.027279  213173 pod_ready.go:92] pod "kube-scheduler-kindnet-511142" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:01.027299  213173 pod_ready.go:81] duration metric: took 398.495291ms waiting for pod "kube-scheduler-kindnet-511142" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:01.027308  213173 pod_ready.go:38] duration metric: took 1.999084417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
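
The pod_ready lines above come from repeatedly fetching each control-plane pod and checking its Ready condition. A minimal client-go sketch of the same check; the kubeconfig path and the 2-second poll interval are assumptions, and this is an illustration rather than minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a named pod until its PodReady condition is True or the timeout expires.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling through transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "etcd-kindnet-511142", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}
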
	I0912 22:16:01.027322  213173 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:16:01.027364  213173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:16:01.038523  213173 api_server.go:72] duration metric: took 7.138463939s to wait for apiserver process to appear ...
	I0912 22:16:01.038548  213173 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:16:01.038566  213173 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0912 22:16:01.042683  213173 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0912 22:16:01.043753  213173 api_server.go:141] control plane version: v1.28.1
	I0912 22:16:01.043776  213173 api_server.go:131] duration metric: took 5.220452ms to wait for apiserver health ...
	I0912 22:16:01.043789  213173 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:16:01.230750  213173 system_pods.go:59] 8 kube-system pods found
	I0912 22:16:01.230774  213173 system_pods.go:61] "coredns-5dd5756b68-k62tg" [93ad447d-f782-4de2-845a-ecf4d1dc614e] Running
	I0912 22:16:01.230779  213173 system_pods.go:61] "etcd-kindnet-511142" [c5b7e8fa-05b6-435e-a041-5e62ebc70550] Running
	I0912 22:16:01.230784  213173 system_pods.go:61] "kindnet-rm5qw" [18b6f27d-5f3d-4d34-9686-d24bb3d27c25] Running
	I0912 22:16:01.230788  213173 system_pods.go:61] "kube-apiserver-kindnet-511142" [09bf7a95-529c-4d2f-aad2-2f1736da3202] Running
	I0912 22:16:01.230792  213173 system_pods.go:61] "kube-controller-manager-kindnet-511142" [0687342a-6101-4597-a611-627efc1ebac2] Running
	I0912 22:16:01.230796  213173 system_pods.go:61] "kube-proxy-nwvr2" [66707d9a-d499-449d-acb3-166500397ddd] Running
	I0912 22:16:01.230799  213173 system_pods.go:61] "kube-scheduler-kindnet-511142" [c22d8c1a-0139-4eb2-8e38-72ae60604c19] Running
	I0912 22:16:01.230803  213173 system_pods.go:61] "storage-provisioner" [447aa53e-da19-44e7-9b34-eb37f75c156e] Running
	I0912 22:16:01.230808  213173 system_pods.go:74] duration metric: took 187.013363ms to wait for pod list to return data ...
	I0912 22:16:01.230819  213173 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:16:01.427800  213173 default_sa.go:45] found service account: "default"
	I0912 22:16:01.427828  213173 default_sa.go:55] duration metric: took 197.002262ms for default service account to be created ...
	I0912 22:16:01.427841  213173 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:16:01.631451  213173 system_pods.go:86] 8 kube-system pods found
	I0912 22:16:01.631481  213173 system_pods.go:89] "coredns-5dd5756b68-k62tg" [93ad447d-f782-4de2-845a-ecf4d1dc614e] Running
	I0912 22:16:01.631489  213173 system_pods.go:89] "etcd-kindnet-511142" [c5b7e8fa-05b6-435e-a041-5e62ebc70550] Running
	I0912 22:16:01.631496  213173 system_pods.go:89] "kindnet-rm5qw" [18b6f27d-5f3d-4d34-9686-d24bb3d27c25] Running
	I0912 22:16:01.631503  213173 system_pods.go:89] "kube-apiserver-kindnet-511142" [09bf7a95-529c-4d2f-aad2-2f1736da3202] Running
	I0912 22:16:01.631510  213173 system_pods.go:89] "kube-controller-manager-kindnet-511142" [0687342a-6101-4597-a611-627efc1ebac2] Running
	I0912 22:16:01.631522  213173 system_pods.go:89] "kube-proxy-nwvr2" [66707d9a-d499-449d-acb3-166500397ddd] Running
	I0912 22:16:01.631532  213173 system_pods.go:89] "kube-scheduler-kindnet-511142" [c22d8c1a-0139-4eb2-8e38-72ae60604c19] Running
	I0912 22:16:01.631539  213173 system_pods.go:89] "storage-provisioner" [447aa53e-da19-44e7-9b34-eb37f75c156e] Running
	I0912 22:16:01.631550  213173 system_pods.go:126] duration metric: took 203.7028ms to wait for k8s-apps to be running ...
	I0912 22:16:01.631562  213173 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:16:01.631618  213173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:01.642347  213173 system_svc.go:56] duration metric: took 10.772442ms WaitForService to wait for kubelet.
	I0912 22:16:01.642369  213173 kubeadm.go:581] duration metric: took 7.742318263s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:16:01.642385  213173 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:16:01.827823  213173 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:16:01.827843  213173 node_conditions.go:123] node cpu capacity is 8
	I0912 22:16:01.827854  213173 node_conditions.go:105] duration metric: took 185.463681ms to run NodePressure ...
	I0912 22:16:01.827864  213173 start.go:228] waiting for startup goroutines ...
	I0912 22:16:01.827870  213173 start.go:233] waiting for cluster config update ...
	I0912 22:16:01.827880  213173 start.go:242] writing updated cluster config ...
	I0912 22:16:01.828111  213173 ssh_runner.go:195] Run: rm -f paused
	I0912 22:16:01.887836  213173 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:16:01.890220  213173 out.go:177] * Done! kubectl is now configured to use "kindnet-511142" cluster and "default" namespace by default
	W0912 22:15:57.510643  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:15:57.510673  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:15:57.510690  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:15:57.563318  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:15:57.563349  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:15:57.590766  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:15:57.590796  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:00.126306  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:16:00.126679  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:16:00.126726  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:16:00.126775  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:16:00.158629  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:00.158658  187890 cri.go:89] found id: ""
	I0912 22:16:00.158667  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:16:00.158716  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:16:00.161961  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:16:00.162021  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:16:00.194254  187890 cri.go:89] found id: ""
	I0912 22:16:00.194278  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.194286  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:16:00.194294  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:16:00.194350  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:16:00.226449  187890 cri.go:89] found id: ""
	I0912 22:16:00.226475  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.226484  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:16:00.226492  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:16:00.226550  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:16:00.265385  187890 cri.go:89] found id: ""
	I0912 22:16:00.265407  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.265419  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:16:00.265426  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:16:00.265470  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:16:00.297646  187890 cri.go:89] found id: ""
	I0912 22:16:00.297675  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.297687  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:16:00.297696  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:16:00.297749  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:16:00.329347  187890 cri.go:89] found id: ""
	I0912 22:16:00.329370  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.329376  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:16:00.329383  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:16:00.329426  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:16:00.362045  187890 cri.go:89] found id: ""
	I0912 22:16:00.362066  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.362076  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:16:00.362083  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:16:00.362131  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:16:00.399017  187890 cri.go:89] found id: ""
	I0912 22:16:00.399044  187890 logs.go:284] 0 containers: []
	W0912 22:16:00.399054  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:16:00.399064  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:16:00.399075  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:16:00.426030  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:16:00.426060  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:00.466834  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:16:00.466855  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:16:00.573423  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:16:00.573457  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:16:00.592387  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:16:00.592425  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:16:00.652225  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:16:00.652248  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:16:00.652259  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:02.694094  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.694127  211844 pod_ready.go:81] duration metric: took 6.015238147s waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.694143  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699044  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.699066  211844 pod_ready.go:81] duration metric: took 4.915199ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.699078  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703925  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.703944  211844 pod_ready.go:81] duration metric: took 4.859474ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.703954  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708650  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.708666  211844 pod_ready.go:81] duration metric: took 4.706239ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.708673  211844 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713488  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:02.713505  211844 pod_ready.go:81] duration metric: took 4.826823ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:02.713512  211844 pod_ready.go:38] duration metric: took 14.163791242s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:02.713528  211844 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0912 22:16:02.720933  211844 ops.go:34] apiserver oom_adj: -16
	I0912 22:16:02.720954  211844 kubeadm.go:640] restartCluster took 32.5040247s
	I0912 22:16:02.720964  211844 kubeadm.go:406] StartCluster complete in 32.581463145s
	I0912 22:16:02.720984  211844 settings.go:142] acquiring lock: {Name:mk27d6c9e2209c1484da49df89f359f1b22a9261 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.721056  211844 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:16:02.722576  211844 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/kubeconfig: {Name:mk41a52745552a5cecc3511e6da68b50fcd6941f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.722870  211844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0912 22:16:02.722967  211844 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0912 22:16:02.725245  211844 out.go:177] * Enabled addons: 
	I0912 22:16:02.723128  211844 config.go:182] Loaded profile config "pause-959901": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:16:02.723914  211844 kapi.go:59] client config for pause-959901: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.crt", KeyFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/profiles/pause-959901/client.key", CAFile:"/home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c15e60), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0912 22:16:02.726698  211844 addons.go:502] enable addons completed in 3.731707ms: enabled=[]
	I0912 22:16:02.729845  211844 kapi.go:248] "coredns" deployment in "kube-system" namespace and "pause-959901" context rescaled to 1 replicas
	I0912 22:16:02.729874  211844 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0912 22:16:02.731435  211844 out.go:177] * Verifying Kubernetes components...
	I0912 22:16:01.291455  223454 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:16:01.291474  223454 crio.go:415] Images already preloaded, skipping extraction
	I0912 22:16:01.291513  223454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0912 22:16:01.323370  223454 crio.go:496] all images are preloaded for cri-o runtime.
	I0912 22:16:01.323394  223454 cache_images.go:84] Images are preloaded, skipping loading
	I0912 22:16:01.323443  223454 ssh_runner.go:195] Run: crio config
	I0912 22:16:01.366097  223454 cni.go:84] Creating CNI manager for "calico"
	I0912 22:16:01.366129  223454 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0912 22:16:01.366153  223454 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-511142 NodeName:calico-511142 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0912 22:16:01.366280  223454 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-511142"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0912 22:16:01.366344  223454 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=calico-511142 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
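
The kubeadm.yaml shown above is rendered from the values in the "kubeadm options:" line. A minimal text/template sketch of generating such a manifest; the kubeadmParams struct, its field names, and the trimmed template are illustrative rather than minikube's actual KubeadmConfig and template:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries just the values this trimmed template needs.
type kubeadmParams struct {
	NodeName          string
	AdvertiseAddress  string
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const kubeadmTemplate = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	p := kubeadmParams{
		NodeName:          "calico-511142",
		AdvertiseAddress:  "192.168.85.2",
		KubernetesVersion: "v1.28.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	// Render the manifest to stdout; minikube would instead scp it to /var/tmp/minikube/kubeadm.yaml.new.
	template.Must(template.New("kubeadm").Parse(kubeadmTemplate)).Execute(os.Stdout, p)
}
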
	I0912 22:16:01.366390  223454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0912 22:16:01.374632  223454 binaries.go:44] Found k8s binaries, skipping transfer
	I0912 22:16:01.374695  223454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0912 22:16:01.382385  223454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (423 bytes)
	I0912 22:16:01.398278  223454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0912 22:16:01.414100  223454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2094 bytes)
	I0912 22:16:01.430265  223454 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0912 22:16:01.433269  223454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
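	Note: the bash one-liner above ensures there is exactly one control-plane.minikube.internal record in /etc/hosts: it filters out any existing entry and appends the current mapping. A rough Python equivalent of the same idea, purely illustrative (minikube does this over SSH with the bash command shown, not with Python):

    # Illustrative sketch of the host-record update; needs root, like the sudo cp above.
    HOSTS = "/etc/hosts"
    ENTRY = "192.168.85.2\tcontrol-plane.minikube.internal"

    with open(HOSTS) as f:
        # drop any line that already ends with the control-plane host name
        lines = [l for l in f if not l.rstrip("\n").endswith("control-plane.minikube.internal")]
    lines.append(ENTRY + "\n")
    with open(HOSTS, "w") as f:
        f.writelines(lines)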
	I0912 22:16:01.443019  223454 certs.go:56] Setting up /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142 for IP: 192.168.85.2
	I0912 22:16:01.443049  223454 certs.go:190] acquiring lock for shared ca certs: {Name:mk61327f1fa12512fba6a15661f030034d23bf2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.443183  223454 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key
	I0912 22:16:01.443236  223454 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key
	I0912 22:16:01.443290  223454 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key
	I0912 22:16:01.443319  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt with IP's: []
	I0912 22:16:01.676654  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt ...
	I0912 22:16:01.676680  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: {Name:mka45ef1b913de9346a5f19fd570d11dafcf85f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.676871  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key ...
	I0912 22:16:01.676885  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.key: {Name:mk36dc012b44c1fe4138f5dbcd529788548f871c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.676984  223454 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c
	I0912 22:16:01.676999  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c with IP's: [192.168.85.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0912 22:16:01.825408  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c ...
	I0912 22:16:01.825436  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c: {Name:mk3fcc74426c6993822e92b0b60a55a5f0c47cd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.825612  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c ...
	I0912 22:16:01.825627  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c: {Name:mk53f7083d604b80843316125be439f6e63da4f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:01.825725  223454 certs.go:337] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt.43b9df8c -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt
	I0912 22:16:01.825814  223454 certs.go:341] copying /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key.43b9df8c -> /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key
	I0912 22:16:01.825870  223454 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key
	I0912 22:16:01.825881  223454 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt with IP's: []
	I0912 22:16:02.097739  223454 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt ...
	I0912 22:16:02.097768  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt: {Name:mkd23ffd8017921e36ac4c9139acd85c8d83a9b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.097920  223454 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key ...
	I0912 22:16:02.097930  223454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key: {Name:mk87a5f3fa3343ea9b4e3fc9451edc2267b7186e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 22:16:02.098080  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem (1338 bytes)
	W0912 22:16:02.098119  223454 certs.go:433] ignoring /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698_empty.pem, impossibly tiny 0 bytes
	I0912 22:16:02.098134  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca-key.pem (1675 bytes)
	I0912 22:16:02.098170  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/ca.pem (1082 bytes)
	I0912 22:16:02.098196  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/cert.pem (1123 bytes)
	I0912 22:16:02.098219  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/certs/home/jenkins/minikube-integration/17194-15878/.minikube/certs/key.pem (1679 bytes)
	I0912 22:16:02.098255  223454 certs.go:437] found cert: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem (1708 bytes)
	I0912 22:16:02.098796  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0912 22:16:02.120937  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0912 22:16:02.142109  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0912 22:16:02.162535  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0912 22:16:02.183284  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0912 22:16:02.204528  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0912 22:16:02.225679  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0912 22:16:02.246630  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0912 22:16:02.267764  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/certs/22698.pem --> /usr/share/ca-certificates/22698.pem (1338 bytes)
	I0912 22:16:02.288498  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/ssl/certs/226982.pem --> /usr/share/ca-certificates/226982.pem (1708 bytes)
	I0912 22:16:02.308780  223454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0912 22:16:02.329046  223454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0912 22:16:02.344242  223454 ssh_runner.go:195] Run: openssl version
	I0912 22:16:02.349060  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22698.pem && ln -fs /usr/share/ca-certificates/22698.pem /etc/ssl/certs/22698.pem"
	I0912 22:16:02.357102  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.360258  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 12 21:49 /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.360303  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22698.pem
	I0912 22:16:02.366482  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22698.pem /etc/ssl/certs/51391683.0"
	I0912 22:16:02.374337  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/226982.pem && ln -fs /usr/share/ca-certificates/226982.pem /etc/ssl/certs/226982.pem"
	I0912 22:16:02.382344  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.385201  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 12 21:49 /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.385233  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/226982.pem
	I0912 22:16:02.391297  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/226982.pem /etc/ssl/certs/3ec20f2e.0"
	I0912 22:16:02.400524  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0912 22:16:02.409907  223454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.413670  223454 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 12 21:44 /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.413728  223454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0912 22:16:02.420633  223454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
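	Note: the repeated pattern above is OpenSSL's subject-hash lookup convention: "openssl x509 -hash -noout -in <cert>" prints a short hash (b5213941 for the minikube CA in this log), and the trust store expects a symlink named <hash>.0 in /etc/ssl/certs that points at the PEM. A minimal sketch of those two steps, illustrative only and requiring root:

    # Illustrative sketch of the hash/symlink pattern used above.
    import os
    import subprocess

    cert = "/usr/share/ca-certificates/minikubeCA.pem"
    out = subprocess.run(["openssl", "x509", "-hash", "-noout", "-in", cert],
                         capture_output=True, text=True, check=True)
    h = out.stdout.strip()                 # e.g. "b5213941" as in the log
    link = f"/etc/ssl/certs/{h}.0"
    if not os.path.islink(link):           # mirrors the `test -L || ln -fs` guard
        os.symlink(cert, link)             # needs root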
	I0912 22:16:02.428832  223454 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0912 22:16:02.431608  223454 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0912 22:16:02.431653  223454 kubeadm.go:404] StartCluster: {Name:calico-511142 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:calico-511142 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 22:16:02.431728  223454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0912 22:16:02.431781  223454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0912 22:16:02.464318  223454 cri.go:89] found id: ""
	I0912 22:16:02.464382  223454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0912 22:16:02.472448  223454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0912 22:16:02.480340  223454 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0912 22:16:02.480406  223454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0912 22:16:02.487841  223454 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0912 22:16:02.487879  223454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0912 22:16:02.532467  223454 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0912 22:16:02.532535  223454 kubeadm.go:322] [preflight] Running pre-flight checks
	I0912 22:16:02.566295  223454 kubeadm.go:322] [preflight] The system verification failed. Printing the output from the verification:
	I0912 22:16:02.566391  223454 kubeadm.go:322] KERNEL_VERSION: 5.15.0-1041-gcp
	I0912 22:16:02.566426  223454 kubeadm.go:322] OS: Linux
	I0912 22:16:02.566479  223454 kubeadm.go:322] CGROUPS_CPU: enabled
	I0912 22:16:02.566526  223454 kubeadm.go:322] CGROUPS_CPUACCT: enabled
	I0912 22:16:02.566574  223454 kubeadm.go:322] CGROUPS_CPUSET: enabled
	I0912 22:16:02.566637  223454 kubeadm.go:322] CGROUPS_DEVICES: enabled
	I0912 22:16:02.566699  223454 kubeadm.go:322] CGROUPS_FREEZER: enabled
	I0912 22:16:02.566767  223454 kubeadm.go:322] CGROUPS_MEMORY: enabled
	I0912 22:16:02.566836  223454 kubeadm.go:322] CGROUPS_PIDS: enabled
	I0912 22:16:02.566893  223454 kubeadm.go:322] CGROUPS_HUGETLB: enabled
	I0912 22:16:02.566963  223454 kubeadm.go:322] CGROUPS_BLKIO: enabled
	I0912 22:16:02.628115  223454 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0912 22:16:02.628255  223454 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0912 22:16:02.628414  223454 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0912 22:16:02.838383  223454 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0912 22:16:02.840917  223454 out.go:204]   - Generating certificates and keys ...
	I0912 22:16:02.841079  223454 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0912 22:16:02.841167  223454 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0912 22:16:02.905532  223454 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0912 22:16:03.044965  223454 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0912 22:16:03.264971  223454 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0912 22:16:03.548782  223454 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0912 22:16:03.643129  223454 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0912 22:16:03.643261  223454 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [calico-511142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0912 22:16:03.828995  223454 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0912 22:16:03.829155  223454 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [calico-511142 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0912 22:16:04.093189  223454 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0912 22:16:04.205006  223454 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0912 22:16:04.452072  223454 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0912 22:16:04.452272  223454 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0912 22:16:04.688079  223454 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0912 22:16:05.053111  223454 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0912 22:16:05.166787  223454 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0912 22:16:05.226784  223454 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0912 22:16:05.227967  223454 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0912 22:16:05.230252  223454 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0912 22:16:05.233130  223454 out.go:204]   - Booting up control plane ...
	I0912 22:16:05.233267  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0912 22:16:05.233372  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0912 22:16:05.233450  223454 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0912 22:16:05.241218  223454 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0912 22:16:05.242019  223454 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0912 22:16:05.242089  223454 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0912 22:16:05.322855  223454 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0912 22:16:02.732822  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:02.799271  211844 start.go:890] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0912 22:16:02.799264  211844 node_ready.go:35] waiting up to 6m0s for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891550  211844 node_ready.go:49] node "pause-959901" has status "Ready":"True"
	I0912 22:16:02.891581  211844 node_ready.go:38] duration metric: took 92.274025ms waiting for node "pause-959901" to be "Ready" ...
	I0912 22:16:02.891594  211844 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:03.094209  211844 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.491973  211844 pod_ready.go:92] pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.491996  211844 pod_ready.go:81] duration metric: took 397.761263ms waiting for pod "coredns-5dd5756b68-mtzsr" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.492009  211844 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891703  211844 pod_ready.go:92] pod "etcd-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:03.891726  211844 pod_ready.go:81] duration metric: took 399.709656ms waiting for pod "etcd-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:03.891739  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291560  211844 pod_ready.go:92] pod "kube-apiserver-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.291593  211844 pod_ready.go:81] duration metric: took 399.843007ms waiting for pod "kube-apiserver-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.291607  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691770  211844 pod_ready.go:92] pod "kube-controller-manager-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:04.691795  211844 pod_ready.go:81] duration metric: took 400.178718ms waiting for pod "kube-controller-manager-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:04.691809  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091401  211844 pod_ready.go:92] pod "kube-proxy-z2hh7" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.091421  211844 pod_ready.go:81] duration metric: took 399.605265ms waiting for pod "kube-proxy-z2hh7" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.091435  211844 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492133  211844 pod_ready.go:92] pod "kube-scheduler-pause-959901" in "kube-system" namespace has status "Ready":"True"
	I0912 22:16:05.492156  211844 pod_ready.go:81] duration metric: took 400.714089ms waiting for pod "kube-scheduler-pause-959901" in "kube-system" namespace to be "Ready" ...
	I0912 22:16:05.492172  211844 pod_ready.go:38] duration metric: took 2.600567658s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0912 22:16:05.492191  211844 api_server.go:52] waiting for apiserver process to appear ...
	I0912 22:16:05.492239  211844 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:16:05.502268  211844 api_server.go:72] duration metric: took 2.772365249s to wait for apiserver process to appear ...
	I0912 22:16:05.502290  211844 api_server.go:88] waiting for apiserver healthz status ...
	I0912 22:16:05.502312  211844 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0912 22:16:05.506460  211844 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0912 22:16:05.507747  211844 api_server.go:141] control plane version: v1.28.1
	I0912 22:16:05.507769  211844 api_server.go:131] duration metric: took 5.470962ms to wait for apiserver health ...
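	Note: the apiserver health wait above is a simple poll of https://<node-ip>:8443/healthz until it answers 200 with the body "ok". A minimal sketch of such a poll, illustrative only; TLS verification is skipped here for brevity, whereas a real client should trust the cluster CA instead:

    # Illustrative sketch of a /healthz poll like the one logged above.
    import ssl
    import time
    import urllib.request

    URL = "https://192.168.94.2:8443/healthz"   # node IP and port from this log
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE             # demo only; prefer the cluster CA

    for _ in range(60):                         # retry for up to ~60s
        try:
            with urllib.request.urlopen(URL, context=ctx, timeout=2) as resp:
                if resp.status == 200 and resp.read() == b"ok":
                    print("apiserver healthy")
                    break
        except OSError:
            pass
        time.sleep(1)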
	I0912 22:16:05.508067  211844 system_pods.go:43] waiting for kube-system pods to appear ...
	I0912 22:16:05.694102  211844 system_pods.go:59] 7 kube-system pods found
	I0912 22:16:05.694139  211844 system_pods.go:61] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:05.694147  211844 system_pods.go:61] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:05.694154  211844 system_pods.go:61] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:05.694160  211844 system_pods.go:61] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:05.694168  211844 system_pods.go:61] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:05.694175  211844 system_pods.go:61] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:05.694179  211844 system_pods.go:61] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:05.694186  211844 system_pods.go:74] duration metric: took 186.08578ms to wait for pod list to return data ...
	I0912 22:16:05.694197  211844 default_sa.go:34] waiting for default service account to be created ...
	I0912 22:16:05.891256  211844 default_sa.go:45] found service account: "default"
	I0912 22:16:05.891286  211844 default_sa.go:55] duration metric: took 197.076725ms for default service account to be created ...
	I0912 22:16:05.891298  211844 system_pods.go:116] waiting for k8s-apps to be running ...
	I0912 22:16:06.093850  211844 system_pods.go:86] 7 kube-system pods found
	I0912 22:16:06.093878  211844 system_pods.go:89] "coredns-5dd5756b68-mtzsr" [ebce215d-39b5-449a-9c8f-67054a18fabf] Running
	I0912 22:16:06.093883  211844 system_pods.go:89] "etcd-pause-959901" [8bc25b38-213d-4e32-a67c-455ecf7c8b01] Running
	I0912 22:16:06.093888  211844 system_pods.go:89] "kindnet-km9nv" [d59bdd92-bd6e-408a-a28a-dbd1255077a8] Running
	I0912 22:16:06.093892  211844 system_pods.go:89] "kube-apiserver-pause-959901" [6c258963-d39e-43c1-99fc-23e16363ad27] Running
	I0912 22:16:06.093896  211844 system_pods.go:89] "kube-controller-manager-pause-959901" [08bdf00b-3dde-49a2-9182-56c41bcdf5e6] Running
	I0912 22:16:06.093901  211844 system_pods.go:89] "kube-proxy-z2hh7" [9a0e46a6-3795-4959-8b48-576a02252969] Running
	I0912 22:16:06.093905  211844 system_pods.go:89] "kube-scheduler-pause-959901" [704134f0-db48-4df5-a579-29ec78d00c2b] Running
	I0912 22:16:06.093912  211844 system_pods.go:126] duration metric: took 202.60896ms to wait for k8s-apps to be running ...
	I0912 22:16:06.093921  211844 system_svc.go:44] waiting for kubelet service to be running ....
	I0912 22:16:06.093960  211844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:16:06.105434  211844 system_svc.go:56] duration metric: took 11.502861ms WaitForService to wait for kubelet.
	I0912 22:16:06.105462  211844 kubeadm.go:581] duration metric: took 3.375565082s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0912 22:16:06.105484  211844 node_conditions.go:102] verifying NodePressure condition ...
	I0912 22:16:06.291989  211844 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0912 22:16:06.292012  211844 node_conditions.go:123] node cpu capacity is 8
	I0912 22:16:06.292022  211844 node_conditions.go:105] duration metric: took 186.533943ms to run NodePressure ...
	I0912 22:16:06.292033  211844 start.go:228] waiting for startup goroutines ...
	I0912 22:16:06.292039  211844 start.go:233] waiting for cluster config update ...
	I0912 22:16:06.292047  211844 start.go:242] writing updated cluster config ...
	I0912 22:16:06.292303  211844 ssh_runner.go:195] Run: rm -f paused
	I0912 22:16:06.354421  211844 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0912 22:16:06.357076  211844 out.go:177] * Done! kubectl is now configured to use "pause-959901" cluster and "default" namespace by default
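	Note: the final line compares the local kubectl client version with the cluster's server version and reports the minor-version skew. A minimal sketch of that comparison using "kubectl version -o json"; illustrative only, and it assumes kubectl is on PATH and the current context is reachable:

    # Illustrative sketch of the client/server version comparison reported above.
    import json
    import subprocess

    out = subprocess.run(["kubectl", "version", "-o", "json"],
                         capture_output=True, text=True, check=True)
    v = json.loads(out.stdout)
    client = v["clientVersion"]["gitVersion"]   # e.g. "v1.28.1"
    server = v["serverVersion"]["gitVersion"]   # e.g. "v1.28.1"
    skew = abs(int(client.split(".")[1]) - int(server.split(".")[1]))
    print(f"kubectl: {client}, cluster: {server} (minor skew: {skew})")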
	I0912 22:16:03.190323  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:16:03.190726  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:16:03.190776  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:16:03.190827  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:16:03.224256  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:03.224279  187890 cri.go:89] found id: ""
	I0912 22:16:03.224286  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:16:03.224338  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:16:03.227711  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:16:03.227773  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:16:03.260193  187890 cri.go:89] found id: ""
	I0912 22:16:03.260216  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.260225  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:16:03.260236  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:16:03.260291  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:16:03.293914  187890 cri.go:89] found id: ""
	I0912 22:16:03.293941  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.293947  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:16:03.293954  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:16:03.294005  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:16:03.326288  187890 cri.go:89] found id: ""
	I0912 22:16:03.326311  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.326317  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:16:03.326323  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:16:03.326375  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:16:03.359248  187890 cri.go:89] found id: ""
	I0912 22:16:03.359279  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.359288  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:16:03.359296  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:16:03.359351  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:16:03.392231  187890 cri.go:89] found id: ""
	I0912 22:16:03.392256  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.392263  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:16:03.392270  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:16:03.392353  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:16:03.424933  187890 cri.go:89] found id: ""
	I0912 22:16:03.424954  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.424961  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:16:03.424966  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:16:03.425007  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:16:03.458326  187890 cri.go:89] found id: ""
	I0912 22:16:03.458354  187890 logs.go:284] 0 containers: []
	W0912 22:16:03.458362  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:16:03.458370  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:16:03.458381  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:16:03.542835  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:16:03.542869  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0912 22:16:03.562789  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:16:03.562824  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:16:03.619849  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:16:03.619872  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:16:03.619885  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:03.656667  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:16:03.656696  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:16:03.681718  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:16:03.681750  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:06.217896  187890 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0912 22:16:06.218372  187890 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I0912 22:16:06.218417  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0912 22:16:06.218462  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0912 22:16:06.266326  187890 cri.go:89] found id: "dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:06.266346  187890 cri.go:89] found id: ""
	I0912 22:16:06.266354  187890 logs.go:284] 1 containers: [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f]
	I0912 22:16:06.266395  187890 ssh_runner.go:195] Run: which crictl
	I0912 22:16:06.269941  187890 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0912 22:16:06.269989  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0912 22:16:06.304638  187890 cri.go:89] found id: ""
	I0912 22:16:06.304665  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.304672  187890 logs.go:286] No container was found matching "etcd"
	I0912 22:16:06.304678  187890 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0912 22:16:06.304734  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0912 22:16:06.355126  187890 cri.go:89] found id: ""
	I0912 22:16:06.355147  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.355156  187890 logs.go:286] No container was found matching "coredns"
	I0912 22:16:06.355163  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0912 22:16:06.355223  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0912 22:16:06.403029  187890 cri.go:89] found id: ""
	I0912 22:16:06.403053  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.403062  187890 logs.go:286] No container was found matching "kube-scheduler"
	I0912 22:16:06.403070  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0912 22:16:06.403139  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0912 22:16:06.449147  187890 cri.go:89] found id: ""
	I0912 22:16:06.449177  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.449187  187890 logs.go:286] No container was found matching "kube-proxy"
	I0912 22:16:06.449194  187890 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0912 22:16:06.449251  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0912 22:16:06.495661  187890 cri.go:89] found id: ""
	I0912 22:16:06.495690  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.495697  187890 logs.go:286] No container was found matching "kube-controller-manager"
	I0912 22:16:06.495703  187890 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0912 22:16:06.495743  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0912 22:16:06.541760  187890 cri.go:89] found id: ""
	I0912 22:16:06.541784  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.541794  187890 logs.go:286] No container was found matching "kindnet"
	I0912 22:16:06.541803  187890 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0912 22:16:06.541848  187890 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0912 22:16:06.591636  187890 cri.go:89] found id: ""
	I0912 22:16:06.591655  187890 logs.go:284] 0 containers: []
	W0912 22:16:06.591661  187890 logs.go:286] No container was found matching "storage-provisioner"
	I0912 22:16:06.591669  187890 logs.go:123] Gathering logs for describe nodes ...
	I0912 22:16:06.591688  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0912 22:16:06.662313  187890 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0912 22:16:06.662336  187890 logs.go:123] Gathering logs for kube-apiserver [dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f] ...
	I0912 22:16:06.662350  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dea9e0c9e705e361677a96b6e8940faaf14da2351b11363031633f6e1d79648f"
	I0912 22:16:06.704300  187890 logs.go:123] Gathering logs for CRI-O ...
	I0912 22:16:06.704326  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0912 22:16:06.732963  187890 logs.go:123] Gathering logs for container status ...
	I0912 22:16:06.733039  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0912 22:16:06.784165  187890 logs.go:123] Gathering logs for kubelet ...
	I0912 22:16:06.784191  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0912 22:16:06.907327  187890 logs.go:123] Gathering logs for dmesg ...
	I0912 22:16:06.907363  187890 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	* 
	* ==> CRI-O <==
	* Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.234824416Z" level=info msg="Creating container: kube-system/coredns-5dd5756b68-mtzsr/coredns" id=c5e4a440-8fe8-496c-b1df-d94a175dda90 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.234911964Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.265057339Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8f705fed230a5d630545a209b5fed4d1a74d4b653e65ab038984b57ff8314c92/merged/etc/passwd: no such file or directory"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.265100816Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8f705fed230a5d630545a209b5fed4d1a74d4b653e65ab038984b57ff8314c92/merged/etc/group: no such file or directory"
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.378279992Z" level=info msg="Created container 1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66: kube-system/kindnet-km9nv/kindnet-cni" id=83ebfabf-2960-46bd-8ce0-f45e7563ca99 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.421314532Z" level=info msg="Starting container: 1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66" id=c223ef7e-aa98-4729-bf8d-bf69bf96eea7 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.434514914Z" level=info msg="Started container" PID=4427 containerID=1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66 description=kube-system/kindnet-km9nv/kindnet-cni id=c223ef7e-aa98-4729-bf8d-bf69bf96eea7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8cefea9c539b6bbbefc269d85e1ae250055aed9dd7af112802e0b2983fa6bd
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.440021625Z" level=info msg="Created container b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa: kube-system/coredns-5dd5756b68-mtzsr/coredns" id=c5e4a440-8fe8-496c-b1df-d94a175dda90 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.440769635Z" level=info msg="Starting container: b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa" id=2e9e47a1-4e65-4076-8f35-ca70b1ccda09 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.451146412Z" level=info msg="Started container" PID=4435 containerID=b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa description=kube-system/coredns-5dd5756b68-mtzsr/coredns id=2e9e47a1-4e65-4076-8f35-ca70b1ccda09 name=/runtime.v1.RuntimeService/StartContainer sandboxID=41ecaa7d266ffa580bd52eec24e048623e97a1bece2d119dc2ef194abaa56238
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.456799817Z" level=info msg="Created container 897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca: kube-system/kube-proxy-z2hh7/kube-proxy" id=128fbe78-2696-46f3-aeaf-51265830b05a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.457567587Z" level=info msg="Starting container: 897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca" id=fb0b3553-8051-4a4b-8d52-74b1dd345233 name=/runtime.v1.RuntimeService/StartContainer
	Sep 12 22:15:48 pause-959901 crio[3113]: time="2023-09-12 22:15:48.527340188Z" level=info msg="Started container" PID=4451 containerID=897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca description=kube-system/kube-proxy-z2hh7/kube-proxy id=fb0b3553-8051-4a4b-8d52-74b1dd345233 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c32f7ab3305aec24628e3a50810c4f9d3a77f0be5bb3e1c53453b1e8a1a550
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.020931661Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025726800Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025761986Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.025780048Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029395747Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029419462Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.029431346Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032467811Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032490070Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.032503438Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.035766900Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 12 22:15:49 pause-959901 crio[3113]: time="2023-09-12 22:15:49.035786011Z" level=info msg="Updated default CNI network name to kindnet"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b7b3942c5e983       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   21 seconds ago      Running             coredns                   2                   41ecaa7d266ff       coredns-5dd5756b68-mtzsr
	897b8d78ec723       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   21 seconds ago      Running             kube-proxy                3                   c3c32f7ab3305       kube-proxy-z2hh7
	1603f26d864b3       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   21 seconds ago      Running             kindnet-cni               3                   3e8cefea9c539       kindnet-km9nv
	4c5ca908c2b82       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   26 seconds ago      Running             kube-controller-manager   3                   3f64d56d033b3       kube-controller-manager-pause-959901
	21c414acf582f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago      Running             etcd                      3                   21f89d0b5ddd2       etcd-pause-959901
	2c734cf8137d9       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   26 seconds ago      Running             kube-scheduler            3                   e95e4f41c7e26       kube-scheduler-pause-959901
	d3642e75c030e       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   27 seconds ago      Running             kube-apiserver            2                   1dced900bc3c4       kube-apiserver-pause-959901
	c48215f38677a       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   31 seconds ago      Exited              kube-scheduler            2                   e95e4f41c7e26       kube-scheduler-pause-959901
	547dc8b525719       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   38 seconds ago      Exited              kube-proxy                2                   c3c32f7ab3305       kube-proxy-z2hh7
	bb8b7ab1358b0       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc   38 seconds ago      Exited              kindnet-cni               2                   3e8cefea9c539       kindnet-km9nv
	8caf71e9a8547       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   38 seconds ago      Exited              kube-controller-manager   2                   3f64d56d033b3       kube-controller-manager-pause-959901
	3b24877fff317       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   38 seconds ago      Exited              etcd                      2                   21f89d0b5ddd2       etcd-pause-959901
	dda5a9b46878c       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   51 seconds ago      Exited              kube-apiserver            1                   1dced900bc3c4       kube-apiserver-pause-959901
	35a9cfcc69267       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   51 seconds ago      Exited              coredns                   1                   41ecaa7d266ff       coredns-5dd5756b68-mtzsr
	
	* 
	* ==> coredns [35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:51796 - 16835 "HINFO IN 609490615299251194.5076587521202352489. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012172396s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> coredns [b7b3942c5e9838db19e1a8fc9c458b77dc569416b66a8d45caf1aff926b1effa] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34959 - 20434 "HINFO IN 4640168976983627359.2460408328691745340. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.086687202s
	
	* 
	* ==> describe nodes <==
	* Name:               pause-959901
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-959901
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=45f04e6c33f17ea86560d581e35f03eca0c584e1
	                    minikube.k8s.io/name=pause-959901
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_12T22_14_50_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 12 Sep 2023 22:14:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-959901
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 12 Sep 2023 22:16:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:14:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 12 Sep 2023 22:15:46 +0000   Tue, 12 Sep 2023 22:15:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-959901
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859432Ki
	  pods:               110
	System Info:
	  Machine ID:                 0c4baa6b94c94648824c3c90ad6b4915
	  System UUID:                34eec731-f746-4460-992e-1e0db2bf2d99
	  Boot ID:                    ba5f5c49-ab96-46a2-94a7-f55592fcb8c1
	  Kernel Version:             5.15.0-1041-gcp
	  OS Image:                   Ubuntu 22.04.3 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-mtzsr                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     68s
	  kube-system                 etcd-pause-959901                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         82s
	  kube-system                 kindnet-km9nv                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      68s
	  kube-system                 kube-apiserver-pause-959901             250m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-959901    200m (2%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-z2hh7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-pause-959901             100m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     87s (x8 over 87s)  kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     81s                kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  81s                kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s                kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 81s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           68s                node-controller  Node pause-959901 event: Registered Node pause-959901 in Controller
	  Normal  NodeReady                65s                kubelet          Node pause-959901 status is now: NodeReady
	  Normal  Starting                 28s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-959901 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-959901 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s (x8 over 28s)  kubelet          Node pause-959901 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           12s                node-controller  Node pause-959901 event: Registered Node pause-959901 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep12 21:54] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[ +32.764792] IPv4: martian source 10.244.0.5 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 6f c0 8a 48 09 56 64 73 98 ed fe 08 00
	[Sep12 22:03] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000008] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +1.027394] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +2.011799] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000007] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +4.095589] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000008] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +8.191199] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-dd1ba5635088
	[  +0.000005] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[Sep12 22:05] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +1.031483] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000007] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +2.019755] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +4.255579] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000006] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[  +8.187238] IPv4: martian source 10.96.0.1 from 10.244.0.2, on dev br-dd1ba5635088
	[  +0.000009] ll header: 00000000: 02 42 dc 7c 21 dd 02 42 c0 a8 3a 02 08 00
	[Sep12 22:12] process 'docker/tmp/qemu-check536253658/check' started with executable stack
	
	* 
	* ==> etcd [21c414acf582f220c38426233c74918a1def7720a72165202d1cc5a3b6931590] <==
	* {"level":"info","ts":"2023-09-12T22:15:43.430983Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-12T22:15:43.431249Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T22:15:43.431324Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-12T22:15:43.431342Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:43.431408Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:45.066306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066393Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:45.066408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.066429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 4"}
	{"level":"info","ts":"2023-09-12T22:15:45.06782Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:pause-959901 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T22:15:45.067858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:45.067835Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:45.068014Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:45.068062Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:45.069327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2023-09-12T22:15:45.069435Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T22:15:56.674838Z","caller":"traceutil/trace.go:171","msg":"trace[1788256736] transaction","detail":"{read_only:false; response_revision:503; number_of_response:1; }","duration":"105.50975ms","start":"2023-09-12T22:15:56.569303Z","end":"2023-09-12T22:15:56.674813Z","steps":["trace[1788256736] 'process raft request'  (duration: 94.027077ms)","trace[1788256736] 'compare'  (duration: 11.299261ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.109917Z","caller":"traceutil/trace.go:171","msg":"trace[241787556] transaction","detail":"{read_only:false; response_revision:505; number_of_response:1; }","duration":"177.357169ms","start":"2023-09-12T22:15:56.932542Z","end":"2023-09-12T22:15:57.109899Z","steps":["trace[241787556] 'process raft request'  (duration: 116.672895ms)","trace[241787556] 'compare'  (duration: 60.592835ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.298223Z","caller":"traceutil/trace.go:171","msg":"trace[786601345] linearizableReadLoop","detail":"{readStateIndex:537; appliedIndex:536; }","duration":"108.445578ms","start":"2023-09-12T22:15:57.189761Z","end":"2023-09-12T22:15:57.298207Z","steps":["trace[786601345] 'read index received'  (duration: 49.354996ms)","trace[786601345] 'applied index is now lower than readState.Index'  (duration: 59.089877ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-12T22:15:57.298337Z","caller":"traceutil/trace.go:171","msg":"trace[1661722603] transaction","detail":"{read_only:false; response_revision:506; number_of_response:1; }","duration":"179.471387ms","start":"2023-09-12T22:15:57.118839Z","end":"2023-09-12T22:15:57.29831Z","steps":["trace[1661722603] 'process raft request'  (duration: 120.328252ms)","trace[1661722603] 'compare'  (duration: 58.928244ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-12T22:15:57.29842Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.664012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-959901\" ","response":"range_response_count:1 size:5458"}
	{"level":"info","ts":"2023-09-12T22:15:57.298478Z","caller":"traceutil/trace.go:171","msg":"trace[1626776588] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-959901; range_end:; response_count:1; response_revision:506; }","duration":"108.745835ms","start":"2023-09-12T22:15:57.189721Z","end":"2023-09-12T22:15:57.298467Z","steps":["trace[1626776588] 'agreement among raft nodes before linearized reading'  (duration: 108.57011ms)"],"step_count":1}
	
	* 
	* ==> etcd [3b24877fff317c769401f6e12bbbde35264392f46957545a2e6c00fca5d730b3] <==
	* {"level":"info","ts":"2023-09-12T22:15:32.156993Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-12T22:15:33.943128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943212Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2023-09-12T22:15:33.943229Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.943235Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.943243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.94325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2023-09-12T22:15:33.94401Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:pause-959901 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-12T22:15:33.94404Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:33.944042Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-12T22:15:33.944238Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:33.944282Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-12T22:15:33.945252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2023-09-12T22:15:33.945417Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-12T22:15:40.847166Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2023-09-12T22:15:40.847249Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-959901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"warn","ts":"2023-09-12T22:15:40.847336Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.84737Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.849057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2023-09-12T22:15:40.849105Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2023-09-12T22:15:40.849164Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"info","ts":"2023-09-12T22:15:40.851153Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:40.851247Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2023-09-12T22:15:40.851269Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-959901","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	* 
	* ==> kernel <==
	*  22:16:10 up  1:58,  0 users,  load average: 4.31, 3.71, 2.21
	Linux pause-959901 5.15.0-1041-gcp #49~20.04.1-Ubuntu SMP Tue Aug 29 06:49:34 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.3 LTS"
	
	* 
	* ==> kindnet [1603f26d864b37e7593a85e896e949cf7e3c460aa58f7f8265a032e823128b66] <==
	* I0912 22:15:48.527807       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0912 22:15:48.527886       1 main.go:107] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0912 22:15:48.528066       1 main.go:116] setting mtu 1500 for CNI 
	I0912 22:15:48.528094       1 main.go:146] kindnetd IP family: "ipv4"
	I0912 22:15:48.528117       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0912 22:15:48.928564       1 main.go:223] Handling node with IPs: map[192.168.94.2:{}]
	I0912 22:15:49.020671       1 main.go:227] handling current node
	I0912 22:15:59.035040       1 main.go:223] Handling node with IPs: map[192.168.94.2:{}]
	I0912 22:15:59.035107       1 main.go:227] handling current node
	I0912 22:16:09.047011       1 main.go:223] Handling node with IPs: map[192.168.94.2:{}]
	I0912 22:16:09.047126       1 main.go:227] handling current node
	
	* 
	* ==> kindnet [bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6] <==
	* I0912 22:15:32.029514       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0912 22:15:32.029582       1 main.go:107] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0912 22:15:32.029763       1 main.go:116] setting mtu 1500 for CNI 
	I0912 22:15:32.029790       1 main.go:146] kindnetd IP family: "ipv4"
	I0912 22:15:32.029815       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0912 22:15:32.347719       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:32.421470       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:33.422725       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:35.424316       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0912 22:15:38.425031       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> kube-apiserver [d3642e75c030ea3fe88aa1063dbe616912d613b99421d602b2ccccc303608f9b] <==
	* I0912 22:15:46.241609       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0912 22:15:46.242177       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0912 22:15:46.242192       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0912 22:15:46.242945       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:15:46.243032       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:15:46.342753       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0912 22:15:46.343334       1 aggregator.go:166] initial CRD sync complete...
	I0912 22:15:46.343405       1 autoregister_controller.go:141] Starting autoregister controller
	I0912 22:15:46.343435       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0912 22:15:46.438149       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0912 22:15:46.438190       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0912 22:15:46.438214       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0912 22:15:46.439331       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0912 22:15:46.440320       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0912 22:15:46.441468       1 shared_informer.go:318] Caches are synced for configmaps
	I0912 22:15:46.441700       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0912 22:15:46.443700       1 cache.go:39] Caches are synced for autoregister controller
	E0912 22:15:46.443803       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0912 22:15:46.524703       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0912 22:15:47.244395       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0912 22:15:48.157972       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0912 22:15:48.263495       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0912 22:15:48.278945       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0912 22:15:48.522139       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0912 22:15:48.532513       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-apiserver [dda5a9b46878cf098d40e5f1d9dfafd775f6a514257a061bd7524b6f2b154a4b] <==
	* }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:22.563845       1 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:23.246145       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0912 22:15:23.858198       1 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [4c5ca908c2b82e4e21818aaff746f200623a8890e73cfe552129eff8ac2c746c] <==
	* I0912 22:15:58.889395       1 shared_informer.go:318] Caches are synced for taint
	I0912 22:15:58.889578       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0912 22:15:58.889560       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0912 22:15:58.889652       1 taint_manager.go:211] "Sending events to api server"
	I0912 22:15:58.889709       1 event.go:307] "Event occurred" object="pause-959901" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-959901 event: Registered Node pause-959901 in Controller"
	I0912 22:15:58.889789       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-959901"
	I0912 22:15:58.889861       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0912 22:15:58.916778       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 22:15:58.921910       1 shared_informer.go:318] Caches are synced for daemon sets
	I0912 22:15:58.926572       1 shared_informer.go:318] Caches are synced for GC
	I0912 22:15:58.928727       1 shared_informer.go:318] Caches are synced for stateful set
	I0912 22:15:58.933030       1 shared_informer.go:318] Caches are synced for PVC protection
	I0912 22:15:58.935289       1 shared_informer.go:318] Caches are synced for persistent volume
	I0912 22:15:58.940704       1 shared_informer.go:318] Caches are synced for HPA
	I0912 22:15:58.940738       1 shared_informer.go:318] Caches are synced for attach detach
	I0912 22:15:58.947976       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0912 22:15:58.950344       1 shared_informer.go:318] Caches are synced for job
	I0912 22:15:58.961700       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0912 22:15:58.961818       1 shared_informer.go:318] Caches are synced for endpoint
	I0912 22:15:58.962480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.659µs"
	I0912 22:15:58.963678       1 shared_informer.go:318] Caches are synced for resource quota
	I0912 22:15:59.003624       1 shared_informer.go:318] Caches are synced for disruption
	I0912 22:15:59.338686       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 22:15:59.395777       1 shared_informer.go:318] Caches are synced for garbage collector
	I0912 22:15:59.395813       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-controller-manager [8caf71e9a85470072c52a58288dd4b14ca4aa7ba679faa64ae9099b4d063ab7e] <==
	* I0912 22:15:32.842460       1 serving.go:348] Generated self-signed cert in-memory
	I0912 22:15:33.518684       1 controllermanager.go:189] "Starting" version="v1.28.1"
	I0912 22:15:33.518711       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:15:33.519892       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0912 22:15:33.519943       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0912 22:15:33.520708       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I0912 22:15:33.520811       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-proxy [547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83] <==
	* I0912 22:15:32.190149       1 server_others.go:69] "Using iptables proxy"
	E0912 22:15:32.221576       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:33.410754       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:35.565333       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.337154       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-959901": dial tcp 192.168.94.2:8443: connect: connection refused
	
	* 
	* ==> kube-proxy [897b8d78ec7238c97cc1b8b196d8958795e446780ba2d1dfdb8c6f59509c68ca] <==
	* I0912 22:15:48.631303       1 server_others.go:69] "Using iptables proxy"
	I0912 22:15:48.646830       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0912 22:15:48.674740       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0912 22:15:48.676941       1 server_others.go:152] "Using iptables Proxier"
	I0912 22:15:48.676976       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0912 22:15:48.676986       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0912 22:15:48.677024       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0912 22:15:48.677224       1 server.go:846] "Version info" version="v1.28.1"
	I0912 22:15:48.677243       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0912 22:15:48.678043       1 config.go:188] "Starting service config controller"
	I0912 22:15:48.678070       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0912 22:15:48.678093       1 config.go:97] "Starting endpoint slice config controller"
	I0912 22:15:48.678098       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0912 22:15:48.678586       1 config.go:315] "Starting node config controller"
	I0912 22:15:48.678599       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0912 22:15:48.778227       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0912 22:15:48.778233       1 shared_informer.go:318] Caches are synced for service config
	I0912 22:15:48.778665       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [2c734cf8137d954a6bc98afaf7aa836fa45edde7af6f3364cd7ef7b889371894] <==
	* I0912 22:15:46.348359       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0912 22:15:46.423857       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0912 22:15:46.423898       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0912 22:15:46.423922       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0912 22:15:46.433577       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.433630       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434018       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	E0912 22:15:46.434048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found]
	W0912 22:15:46.434184       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434229       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434308       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	E0912 22:15:46.434444       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0912 22:15:46.434559       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.434718       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.434769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.435200       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.435228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W0912 22:15:46.435241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	E0912 22:15:46.435250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	I0912 22:15:47.624948       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kube-scheduler [c48215f38677afd9eb1eac9c278231055b2104f5efa40af0df3b07dde9952f9e] <==
	* W0912 22:15:39.137663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137755       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.94.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.137767       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.137808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.137861       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.954991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.955039       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.94.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:39.995528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:39.995562       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.161164       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.161206       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.94.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.223768       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.223807       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.94.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.286791       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.286833       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.286850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.94.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.287028       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.94.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.294369       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.294433       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.94.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	W0912 22:15:40.322643       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.322705       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.94.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.94.2:8443: connect: connection refused
	E0912 22:15:40.330387       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0912 22:15:40.330650       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0912 22:15:40.330725       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* Sep 12 22:15:46 pause-959901 kubelet[4082]: E0912 22:15:46.431959    4082 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:pause-959901" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'pause-959901' and this object
	Sep 12 22:15:46 pause-959901 kubelet[4082]: RBAC: [clusterrole.rbac.authorization.k8s.io "system:certificates.k8s.io:certificatesigningrequests:selfnodeclient" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.432247    4082 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446031    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-lib-modules\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446085    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a0e46a6-3795-4959-8b48-576a02252969-xtables-lock\") pod \"kube-proxy-z2hh7\" (UID: \"9a0e46a6-3795-4959-8b48-576a02252969\") " pod="kube-system/kube-proxy-z2hh7"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.446115    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a0e46a6-3795-4959-8b48-576a02252969-lib-modules\") pod \"kube-proxy-z2hh7\" (UID: \"9a0e46a6-3795-4959-8b48-576a02252969\") " pod="kube-system/kube-proxy-z2hh7"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.521197    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-cni-cfg\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.521263    4082 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d59bdd92-bd6e-408a-a28a-dbd1255077a8-xtables-lock\") pod \"kindnet-km9nv\" (UID: \"d59bdd92-bd6e-408a-a28a-dbd1255077a8\") " pod="kube-system/kindnet-km9nv"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.533287    4082 kubelet_node_status.go:108] "Node was previously registered" node="pause-959901"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.533401    4082 kubelet_node_status.go:73] "Successfully registered node" node="pause-959901"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.534753    4082 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 12 22:15:46 pause-959901 kubelet[4082]: I0912 22:15:46.535737    4082 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538082    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538137    4082 projected.go:198] Error preparing data for projected volume kube-api-access-t48bd for pod kube-system/kindnet-km9nv: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538084    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538220    4082 projected.go:198] Error preparing data for projected volume kube-api-access-dvggc for pod kube-system/coredns-5dd5756b68-mtzsr: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538229    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d59bdd92-bd6e-408a-a28a-dbd1255077a8-kube-api-access-t48bd podName:d59bdd92-bd6e-408a-a28a-dbd1255077a8 nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038204 +0000 UTC m=+5.715042289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-t48bd" (UniqueName: "kubernetes.io/projected/d59bdd92-bd6e-408a-a28a-dbd1255077a8-kube-api-access-t48bd") pod "kindnet-km9nv" (UID: "d59bdd92-bd6e-408a-a28a-dbd1255077a8") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538082    4082 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538290    4082 projected.go:198] Error preparing data for projected volume kube-api-access-mtz4x for pod kube-system/kube-proxy-z2hh7: failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538296    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebce215d-39b5-449a-9c8f-67054a18fabf-kube-api-access-dvggc podName:ebce215d-39b5-449a-9c8f-67054a18fabf nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038272189 +0000 UTC m=+5.715110507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-dvggc" (UniqueName: "kubernetes.io/projected/ebce215d-39b5-449a-9c8f-67054a18fabf-kube-api-access-dvggc") pod "coredns-5dd5756b68-mtzsr" (UID: "ebce215d-39b5-449a-9c8f-67054a18fabf") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:47 pause-959901 kubelet[4082]: E0912 22:15:47.538340    4082 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9a0e46a6-3795-4959-8b48-576a02252969-kube-api-access-mtz4x podName:9a0e46a6-3795-4959-8b48-576a02252969 nodeName:}" failed. No retries permitted until 2023-09-12 22:15:48.038327455 +0000 UTC m=+5.715165757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mtz4x" (UniqueName: "kubernetes.io/projected/9a0e46a6-3795-4959-8b48-576a02252969-kube-api-access-mtz4x") pod "kube-proxy-z2hh7" (UID: "9a0e46a6-3795-4959-8b48-576a02252969") : failed to sync configmap cache: timed out waiting for the condition
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.230958    4082 scope.go:117] "RemoveContainer" containerID="bb8b7ab1358b0c1c296fdf8d6498c75a101663022de35fc9032435d60ea67ac6"
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.231581    4082 scope.go:117] "RemoveContainer" containerID="547dc8b5257194c7183b06b0608a04605dd9a50f31093513e51cffc696212e83"
	Sep 12 22:15:48 pause-959901 kubelet[4082]: I0912 22:15:48.231699    4082 scope.go:117] "RemoveContainer" containerID="35a9cfcc69267da33f549bbc20ebb7d4a07d8cb1d60c8daa98c2e0b1c02314a7"
	Sep 12 22:15:56 pause-959901 kubelet[4082]: I0912 22:15:56.477389    4082 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-959901 -n pause-959901
helpers_test.go:261: (dbg) Run:  kubectl --context pause-959901 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (59.91s)
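The kubelet errors above ("failed to sync configmap cache: timed out waiting for the condition") occur while the restarted kubelet is still repopulating its informer caches; the 500ms retries logged by nestedpendingoperations.go are normally expected to succeed once the cache syncs. A minimal shell sketch for checking this by hand, assuming the pause-959901 context is still reachable (these commands are not part of the test harness):

	# Confirm the root CA configmap that the projected volumes depend on exists
	kubectl --context pause-959901 -n kube-system get configmap kube-root-ca.crt
	# Confirm the affected pods eventually mounted their token volumes and are running
	kubectl --context pause-959901 -n kube-system get pods kindnet-km9nv coredns-5dd5756b68-mtzsr kube-proxy-z2hh7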

                                                
                                    

Test pass (268/298)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.9
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.05
10 TestDownloadOnly/v1.28.1/json-events 4.74
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.05
16 TestDownloadOnly/DeleteAll 0.19
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.11
18 TestDownloadOnlyKic 1.22
19 TestBinaryMirror 0.69
20 TestOffline 58.79
22 TestAddons/Setup 108.86
24 TestAddons/parallel/Registry 15.43
26 TestAddons/parallel/InspektorGadget 10.76
27 TestAddons/parallel/MetricsServer 5.63
28 TestAddons/parallel/HelmTiller 9.2
30 TestAddons/parallel/CSI 72.43
31 TestAddons/parallel/Headlamp 12.06
32 TestAddons/parallel/CloudSpanner 5.5
35 TestAddons/serial/GCPAuth/Namespaces 0.11
36 TestAddons/StoppedEnableDisable 12.08
37 TestCertOptions 29.52
38 TestCertExpiration 224.96
40 TestForceSystemdFlag 38.75
41 TestForceSystemdEnv 37.83
43 TestKVMDriverInstallOrUpdate 1.44
47 TestErrorSpam/setup 23.58
48 TestErrorSpam/start 0.57
49 TestErrorSpam/status 0.82
50 TestErrorSpam/pause 1.43
51 TestErrorSpam/unpause 1.45
52 TestErrorSpam/stop 1.33
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 37.53
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 24.74
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.07
63 TestFunctional/serial/CacheCmd/cache/add_remote 2.66
64 TestFunctional/serial/CacheCmd/cache/add_local 0.74
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.26
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.58
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
70 TestFunctional/serial/MinikubeKubectlCmd 0.1
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
72 TestFunctional/serial/ExtraConfig 33.31
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 1.28
75 TestFunctional/serial/LogsFileCmd 1.29
76 TestFunctional/serial/InvalidService 3.89
78 TestFunctional/parallel/ConfigCmd 0.39
79 TestFunctional/parallel/DashboardCmd 7.31
80 TestFunctional/parallel/DryRun 0.33
81 TestFunctional/parallel/InternationalLanguage 0.17
82 TestFunctional/parallel/StatusCmd 0.91
86 TestFunctional/parallel/ServiceCmdConnect 12.74
87 TestFunctional/parallel/AddonsCmd 0.17
88 TestFunctional/parallel/PersistentVolumeClaim 25.12
90 TestFunctional/parallel/SSHCmd 0.66
91 TestFunctional/parallel/CpCmd 1.31
92 TestFunctional/parallel/MySQL 22.56
93 TestFunctional/parallel/FileSync 0.4
94 TestFunctional/parallel/CertSync 1.61
98 TestFunctional/parallel/NodeLabels 0.08
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
102 TestFunctional/parallel/License 0.17
104 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
105 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
107 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.35
108 TestFunctional/parallel/Version/short 0.05
109 TestFunctional/parallel/Version/components 0.64
110 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
111 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
112 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
113 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
114 TestFunctional/parallel/ImageCommands/ImageBuild 3.26
115 TestFunctional/parallel/ImageCommands/Setup 0.99
116 TestFunctional/parallel/MountCmd/any-port 8.33
117 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.07
118 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.1
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.71
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/MountCmd/specific-port 1.82
127 TestFunctional/parallel/MountCmd/VerifyCleanup 1.99
128 TestFunctional/parallel/ServiceCmd/DeployApp 7.17
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.08
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.77
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
134 TestFunctional/parallel/ProfileCmd/profile_list 0.35
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
136 TestFunctional/parallel/ServiceCmd/List 0.52
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
139 TestFunctional/parallel/ServiceCmd/Format 0.55
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
143 TestFunctional/parallel/ServiceCmd/URL 0.37
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.02
150 TestIngressAddonLegacy/StartLegacyK8sCluster 67.89
152 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.75
153 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
157 TestJSONOutput/start/Command 40.1
158 TestJSONOutput/start/Audit 0
160 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/pause/Command 0.66
164 TestJSONOutput/pause/Audit 0
166 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
169 TestJSONOutput/unpause/Command 0.57
170 TestJSONOutput/unpause/Audit 0
172 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
173 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
175 TestJSONOutput/stop/Command 5.8
176 TestJSONOutput/stop/Audit 0
178 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
179 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
180 TestErrorJSONOutput 0.18
182 TestKicCustomNetwork/create_custom_network 31.62
183 TestKicCustomNetwork/use_default_bridge_network 26.39
184 TestKicExistingNetwork 26.97
185 TestKicCustomSubnet 24.17
186 TestKicStaticIP 24.9
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 49.1
191 TestMountStart/serial/StartWithMountFirst 7.98
192 TestMountStart/serial/VerifyMountFirst 0.24
193 TestMountStart/serial/StartWithMountSecond 8.01
194 TestMountStart/serial/VerifyMountSecond 0.24
195 TestMountStart/serial/DeleteFirst 1.59
196 TestMountStart/serial/VerifyMountPostDelete 0.24
197 TestMountStart/serial/Stop 1.17
198 TestMountStart/serial/RestartStopped 6.99
199 TestMountStart/serial/VerifyMountPostStop 0.23
202 TestMultiNode/serial/FreshStart2Nodes 67.99
203 TestMultiNode/serial/DeployApp2Nodes 3.48
205 TestMultiNode/serial/AddNode 19.99
206 TestMultiNode/serial/ProfileList 0.25
207 TestMultiNode/serial/CopyFile 8.72
208 TestMultiNode/serial/StopNode 2.07
209 TestMultiNode/serial/StartAfterStop 11.02
210 TestMultiNode/serial/RestartKeepsNodes 114.5
211 TestMultiNode/serial/DeleteNode 4.62
212 TestMultiNode/serial/StopMultiNode 23.79
213 TestMultiNode/serial/RestartMultiNode 77.05
214 TestMultiNode/serial/ValidateNameConflict 25.58
219 TestPreload 132.28
221 TestScheduledStopUnix 96.45
224 TestInsufficientStorage 13.21
227 TestKubernetesUpgrade 354.55
228 TestMissingContainerUpgrade 152.79
230 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
231 TestNoKubernetes/serial/StartWithK8s 36.11
232 TestNoKubernetes/serial/StartWithStopK8s 8.47
233 TestNoKubernetes/serial/Start 6.71
234 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
235 TestNoKubernetes/serial/ProfileList 1.41
236 TestNoKubernetes/serial/Stop 1.21
237 TestNoKubernetes/serial/StartNoArgs 8.16
238 TestStoppedBinaryUpgrade/Setup 0.55
240 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
248 TestNetworkPlugins/group/false 3.71
252 TestStoppedBinaryUpgrade/MinikubeLogs 0.47
261 TestPause/serial/Start 41.48
262 TestNetworkPlugins/group/auto/Start 42.05
264 TestNetworkPlugins/group/kindnet/Start 44.61
265 TestNetworkPlugins/group/auto/KubeletFlags 0.31
266 TestNetworkPlugins/group/auto/NetCatPod 12.32
267 TestNetworkPlugins/group/auto/DNS 0.19
268 TestNetworkPlugins/group/auto/Localhost 0.18
269 TestNetworkPlugins/group/auto/HairPin 0.15
270 TestNetworkPlugins/group/calico/Start 60.86
271 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
272 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
273 TestNetworkPlugins/group/kindnet/NetCatPod 10.31
274 TestNetworkPlugins/group/custom-flannel/Start 60.6
275 TestNetworkPlugins/group/kindnet/DNS 0.16
276 TestNetworkPlugins/group/kindnet/Localhost 0.17
277 TestNetworkPlugins/group/kindnet/HairPin 0.15
278 TestNetworkPlugins/group/enable-default-cni/Start 39.4
279 TestNetworkPlugins/group/calico/ControllerPod 5.02
280 TestNetworkPlugins/group/calico/KubeletFlags 0.28
281 TestNetworkPlugins/group/calico/NetCatPod 10.39
282 TestNetworkPlugins/group/calico/DNS 0.15
283 TestNetworkPlugins/group/calico/Localhost 0.14
284 TestNetworkPlugins/group/calico/HairPin 0.14
285 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
286 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
287 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
288 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
289 TestNetworkPlugins/group/custom-flannel/DNS 0.21
290 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
291 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
292 TestNetworkPlugins/group/flannel/Start 59.14
293 TestNetworkPlugins/group/enable-default-cni/DNS 33.02
294 TestNetworkPlugins/group/bridge/Start 37.04
295 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
296 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
298 TestStartStop/group/old-k8s-version/serial/FirstStart 125.06
299 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
300 TestNetworkPlugins/group/bridge/NetCatPod 10.28
302 TestStartStop/group/no-preload/serial/FirstStart 54.52
303 TestNetworkPlugins/group/flannel/ControllerPod 5.02
304 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
305 TestNetworkPlugins/group/flannel/NetCatPod 10.35
306 TestNetworkPlugins/group/bridge/DNS 32.94
307 TestNetworkPlugins/group/flannel/DNS 0.18
308 TestNetworkPlugins/group/flannel/Localhost 0.14
309 TestNetworkPlugins/group/flannel/HairPin 0.15
311 TestStartStop/group/embed-certs/serial/FirstStart 43.96
312 TestNetworkPlugins/group/bridge/Localhost 0.16
313 TestNetworkPlugins/group/bridge/HairPin 0.15
314 TestStartStop/group/no-preload/serial/DeployApp 8.41
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.78
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
318 TestStartStop/group/no-preload/serial/Stop 12.28
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
320 TestStartStop/group/no-preload/serial/SecondStart 334.19
321 TestStartStop/group/embed-certs/serial/DeployApp 8.38
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
323 TestStartStop/group/embed-certs/serial/Stop 12.18
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.34
325 TestStartStop/group/old-k8s-version/serial/DeployApp 8.41
326 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
327 TestStartStop/group/embed-certs/serial/SecondStart 336.46
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
330 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.76
331 TestStartStop/group/old-k8s-version/serial/Stop 11.91
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.16
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 343.03
334 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
335 TestStartStop/group/old-k8s-version/serial/SecondStart 417.25
336 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 12.02
337 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
338 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.34
339 TestStartStop/group/no-preload/serial/Pause 2.89
341 TestStartStop/group/newest-cni/serial/FirstStart 39
342 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.02
343 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
344 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
345 TestStartStop/group/embed-certs/serial/Pause 2.94
346 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.02
347 TestStartStop/group/newest-cni/serial/DeployApp 0
348 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.25
349 TestStartStop/group/newest-cni/serial/Stop 12.21
350 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
351 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
352 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.82
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
354 TestStartStop/group/newest-cni/serial/SecondStart 26.72
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
358 TestStartStop/group/newest-cni/serial/Pause 2.35
359 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
360 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
361 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
362 TestStartStop/group/old-k8s-version/serial/Pause 2.5
x
+
TestDownloadOnly/v1.16.0/json-events (6.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-358025 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-358025 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.900489231s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.90s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-358025
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-358025: exit status 85 (53.762399ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-358025 | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |          |
	|         | -p download-only-358025        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 21:43:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:43:26.896760   22709 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:43:26.896846   22709 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:26.896853   22709 out.go:309] Setting ErrFile to fd 2...
	I0912 21:43:26.896858   22709 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:26.897036   22709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	W0912 21:43:26.897154   22709 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17194-15878/.minikube/config/config.json: open /home/jenkins/minikube-integration/17194-15878/.minikube/config/config.json: no such file or directory
	I0912 21:43:26.897690   22709 out.go:303] Setting JSON to true
	I0912 21:43:26.898495   22709 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5155,"bootTime":1694549852,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:43:26.898551   22709 start.go:138] virtualization: kvm guest
	I0912 21:43:26.900556   22709 out.go:97] [download-only-358025] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:43:26.901955   22709 out.go:169] MINIKUBE_LOCATION=17194
	W0912 21:43:26.900677   22709 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball: no such file or directory
	I0912 21:43:26.900754   22709 notify.go:220] Checking for updates...
	I0912 21:43:26.904611   22709 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:43:26.905857   22709 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:43:26.907109   22709 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:43:26.908259   22709 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0912 21:43:26.910446   22709 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0912 21:43:26.910635   22709 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 21:43:26.932025   22709 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 21:43:26.932098   22709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:43:27.267820   22709 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-09-12 21:43:27.259144956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:43:27.267934   22709 docker.go:294] overlay module found
	I0912 21:43:27.269467   22709 out.go:97] Using the docker driver based on user configuration
	I0912 21:43:27.269487   22709 start.go:298] selected driver: docker
	I0912 21:43:27.269492   22709 start.go:902] validating driver "docker" against <nil>
	I0912 21:43:27.269566   22709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:43:27.329733   22709 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:43 SystemTime:2023-09-12 21:43:27.321980272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:43:27.329890   22709 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0912 21:43:27.330333   22709 start_flags.go:384] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0912 21:43:27.330478   22709 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0912 21:43:27.332105   22709 out.go:169] Using Docker driver with root privileges
	I0912 21:43:27.333188   22709 cni.go:84] Creating CNI manager for ""
	I0912 21:43:27.333206   22709 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0912 21:43:27.333213   22709 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0912 21:43:27.333223   22709 start_flags.go:321] config:
	{Name:download-only-358025 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-358025 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:43:27.334549   22709 out.go:97] Starting control plane node download-only-358025 in cluster download-only-358025
	I0912 21:43:27.334561   22709 cache.go:122] Beginning downloading kic base image for docker with crio
	I0912 21:43:27.335607   22709 out.go:97] Pulling base image ...
	I0912 21:43:27.335625   22709 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0912 21:43:27.335728   22709 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local docker daemon
	I0912 21:43:27.349518   22709 cache.go:150] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0912 21:43:27.349667   22709 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 in local cache directory
	I0912 21:43:27.349747   22709 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 to local cache
	I0912 21:43:27.355834   22709 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0912 21:43:27.355855   22709 cache.go:57] Caching tarball of preloaded images
	I0912 21:43:27.355943   22709 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0912 21:43:27.357471   22709 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0912 21:43:27.357487   22709 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:43:27.393357   22709 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0912 21:43:29.734408   22709 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:43:29.734487   22709 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17194-15878/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0912 21:43:30.668730   22709 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on crio
	I0912 21:43:30.669037   22709 profile.go:148] Saving config to /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/download-only-358025/config.json ...
	I0912 21:43:30.669064   22709 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/download-only-358025/config.json: {Name:mk7dc6bc859dc96e835c3b74a1336f0233bbdc9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0912 21:43:30.669239   22709 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0912 21:43:30.669426   22709 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17194-15878/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-358025"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.05s)
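The Last Start log above downloads the v1.16.0 preload tarball with an md5 checksum appended to the URL (download.go at 21:43:27.393357) and verifies it before caching. A minimal shell sketch that reproduces the same fetch-and-verify step outside minikube, assuming curl and md5sum are available; the URL and checksum are copied from the log line, the rest is illustrative:

	# Fetch the preload tarball referenced by download.go
	curl -fLo preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 \
	  https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	# Verify it against the md5 embedded in the download URL's checksum parameter
	echo "432b600409d778ea7a21214e83948570  preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -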

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (4.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-358025 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-358025 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.734704735s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (4.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-358025
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-358025: exit status 85 (52.464412ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-358025 | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |          |
	|         | -p download-only-358025        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-358025 | jenkins | v1.31.2 | 12 Sep 23 21:43 UTC |          |
	|         | -p download-only-358025        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/12 21:43:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0912 21:43:33.854440   22851 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:43:33.854677   22851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:33.854687   22851 out.go:309] Setting ErrFile to fd 2...
	I0912 21:43:33.854695   22851 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:43:33.854911   22851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	W0912 21:43:33.855031   22851 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17194-15878/.minikube/config/config.json: open /home/jenkins/minikube-integration/17194-15878/.minikube/config/config.json: no such file or directory
	I0912 21:43:33.855454   22851 out.go:303] Setting JSON to true
	I0912 21:43:33.856225   22851 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5162,"bootTime":1694549852,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:43:33.856310   22851 start.go:138] virtualization: kvm guest
	I0912 21:43:33.857929   22851 out.go:97] [download-only-358025] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:43:33.859279   22851 out.go:169] MINIKUBE_LOCATION=17194
	I0912 21:43:33.858069   22851 notify.go:220] Checking for updates...
	I0912 21:43:33.861635   22851 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:43:33.862999   22851 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:43:33.864316   22851 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:43:33.865617   22851 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-358025"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-358025
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.22s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-828900 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-828900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-828900
--- PASS: TestDownloadOnlyKic (1.22s)

                                                
                                    
x
+
TestBinaryMirror (0.69s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-484613 --alsologtostderr --binary-mirror http://127.0.0.1:45871 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-484613" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-484613
--- PASS: TestBinaryMirror (0.69s)

                                                
                                    
x
+
TestOffline (58.79s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-761737 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-761737 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (56.320037622s)
helpers_test.go:175: Cleaning up "offline-crio-761737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-761737
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-761737: (2.468816338s)
--- PASS: TestOffline (58.79s)

                                                
                                    
x
+
TestAddons/Setup (108.86s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-348433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-348433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m48.863725701s)
--- PASS: TestAddons/Setup (108.86s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.43s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 29.131599ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-99mmd" [bd420210-8e2d-41d5-8549-97497ba31036] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.015715577s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-f8tqr" [9f8000c3-3670-4964-94d7-368bd159e70f] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013093994s
addons_test.go:316: (dbg) Run:  kubectl --context addons-348433 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-348433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-348433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.618170818s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 ip
2023/09/12 21:45:44 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.43s)
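The checks above exercise the registry addon from inside the cluster (the busybox wget against registry.kube-system.svc.cluster.local), and the DEBUG line shows the host-side probe of port 5000 on the node IP returned by minikube ip. As a hedged follow-up for manual verification only: the addon serves the standard Docker Registry v2 API, so listing its catalog from the host should also work (/v2/_catalog is the stock registry endpoint, not something this test calls):

	# List repositories held by the addon registry via the node IP from `minikube ip`
	curl http://192.168.49.2:5000/v2/_catalog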

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vrdzs" [5288ca0e-7ddd-4288-a238-b622eba19f4e] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.009336909s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-348433
addons_test.go:817: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-348433: (5.746088704s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 2.942699ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-xc9jw" [8592098a-a0be-4d41-b6c5-cfe27b75aa31] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011062104s
addons_test.go:391: (dbg) Run:  kubectl --context addons-348433 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (9.2s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 28.450689ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-pqpkv" [2cf5d2c4-bcda-4708-887e-1364874e0576] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.021650637s
addons_test.go:449: (dbg) Run:  kubectl --context addons-348433 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-348433 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (3.660809601s)
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.20s)

                                                
                                    
x
+
TestAddons/parallel/CSI (72.43s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 12.693272ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [eeb3485e-8914-4676-a5ab-e56b36e90bf9] Pending
helpers_test.go:344: "task-pv-pod" [eeb3485e-8914-4676-a5ab-e56b36e90bf9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [eeb3485e-8914-4676-a5ab-e56b36e90bf9] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.054210334s
addons_test.go:560: (dbg) Run:  kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-348433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-348433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-348433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-348433 delete pod task-pv-pod
addons_test.go:570: (dbg) Done: kubectl --context addons-348433 delete pod task-pv-pod: (1.044403419s)
addons_test.go:576: (dbg) Run:  kubectl --context addons-348433 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-348433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a7559845-f28f-43a6-86f9-36c905969966] Pending
helpers_test.go:344: "task-pv-pod-restore" [a7559845-f28f-43a6-86f9-36c905969966] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a7559845-f28f-43a6-86f9-36c905969966] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.010126475s
addons_test.go:602: (dbg) Run:  kubectl --context addons-348433 delete pod task-pv-pod-restore
addons_test.go:606: (dbg) Run:  kubectl --context addons-348433 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-348433 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-348433 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.551435177s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-348433 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (72.43s)
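For readers following the CSI addon run above, the sketch below recaps the command-level flow the test drove, in order. All commands are taken from the log; the contents of the testdata manifests themselves are not included in this report, so the inline comments only restate the object names visible in the log.

	kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # pod "task-pv-pod" consuming PVC "hpvc"
	kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot "new-snapshot-demo"
	kubectl --context addons-348433 delete pod task-pv-pod
	kubectl --context addons-348433 delete pvc hpvc
	kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # PVC "hpvc-restore" restored from the snapshot
	kubectl --context addons-348433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod "task-pv-pod-restore" on the restored claim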

TestAddons/parallel/Headlamp (12.06s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-348433 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-348433 --alsologtostderr -v=1: (1.050035952s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-crbr7" [f57ae560-7227-417e-aafb-d3b56c95f698] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-crbr7" [f57ae560-7227-417e-aafb-d3b56c95f698] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-crbr7" [f57ae560-7227-417e-aafb-d3b56c95f698] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.007903858s
--- PASS: TestAddons/parallel/Headlamp (12.06s)

TestAddons/parallel/CloudSpanner (5.5s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-hcl4t" [7bcedddc-daea-485c-b949-6b8ddabfb627] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.033895096s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-348433
--- PASS: TestAddons/parallel/CloudSpanner (5.50s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-348433 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-348433 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.08s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-348433
addons_test.go:148: (dbg) Done: out/minikube-linux-amd64 stop -p addons-348433: (11.865176077s)
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-348433
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-348433
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-348433
--- PASS: TestAddons/StoppedEnableDisable (12.08s)

TestCertOptions (29.52s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-272741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-272741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (26.545017339s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-272741 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-272741 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-272741 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-272741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-272741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-272741: (2.189883621s)
--- PASS: TestCertOptions (29.52s)
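As a quick way to see what the cert-options run above actually changed, the sketch below narrows the test's own openssl command down to the SAN and server fields. The base commands appear verbatim in the log; the grep filters and the expected values in the comments are additions for illustration, not part of the test run.

	# Sketch: inspect the apiserver certificate generated with --apiserver-ips/--apiserver-names.
	out/minikube-linux-amd64 -p cert-options-272741 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"            # expect 192.168.15.15 and www.google.com among the SANs
	kubectl --context cert-options-272741 config view | grep server   # expect the API server URL on port 8555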

TestCertExpiration (224.96s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-347810 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-347810 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.395928506s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-347810 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-347810 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.331949104s)
helpers_test.go:175: Cleaning up "cert-expiration-347810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-347810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-347810: (2.226349282s)
--- PASS: TestCertExpiration (224.96s)
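The log only shows the two start invocations (first with --cert-expiration=3m, then with --cert-expiration=8760h). The command below is an assumed manual check, not something this test executed: it reuses the certificate path shown in the cert-options test to print the renewed expiry by hand.

	# Assumed manual check (not run by the test): print the apiserver certificate's notAfter date.
	out/minikube-linux-amd64 -p cert-expiration-347810 ssh \
	  "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"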

TestForceSystemdFlag (38.75s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-983057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-983057 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.134778701s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-983057 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-983057" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-983057
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-983057: (2.33907143s)
--- PASS: TestForceSystemdFlag (38.75s)

TestForceSystemdEnv (37.83s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-824000 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-824000 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.349368636s)
helpers_test.go:175: Cleaning up "force-systemd-env-824000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-824000
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-824000: (2.478871785s)
--- PASS: TestForceSystemdEnv (37.83s)
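The flag and env variants above differ only in how systemd cgroup management is requested. The pairing below is a sketch: the flag-variant commands are taken from the log, while MINIKUBE_FORCE_SYSTEMD=true for the env variant is an assumption (the log shows the variable's name in minikube's environment listing, but not the value the test sets).

	# Flag variant (from the log):
	out/minikube-linux-amd64 start -p force-systemd-flag-983057 --memory=2048 --force-systemd --driver=docker --container-runtime=crio
	# Env variant (assumed value for MINIKUBE_FORCE_SYSTEMD):
	MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-824000 --memory=2048 --driver=docker --container-runtime=crio
	# Both runs should leave CRI-O configured with the systemd cgroup manager, which the flag test checks here:
	out/minikube-linux-amd64 -p force-systemd-flag-983057 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"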

TestKVMDriverInstallOrUpdate (1.44s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.44s)

TestErrorSpam/setup (23.58s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-913597 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-913597 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-913597 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-913597 --driver=docker  --container-runtime=crio: (23.580562261s)
--- PASS: TestErrorSpam/setup (23.58s)

TestErrorSpam/start (0.57s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 start --dry-run
--- PASS: TestErrorSpam/start (0.57s)

TestErrorSpam/status (0.82s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 status
--- PASS: TestErrorSpam/status (0.82s)

TestErrorSpam/pause (1.43s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.45s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 unpause
--- PASS: TestErrorSpam/unpause (1.45s)

TestErrorSpam/stop (1.33s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 stop: (1.17889823s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-913597 --log_dir /tmp/nospam-913597 stop
--- PASS: TestErrorSpam/stop (1.33s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17194-15878/.minikube/files/etc/test/nested/copy/22698/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.53s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-728577 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.529937886s)
--- PASS: TestFunctional/serial/StartWithProxy (37.53s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (24.74s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-728577 --alsologtostderr -v=8: (24.741280628s)
functional_test.go:659: soft start took 24.742066878s for "functional-728577" cluster.
--- PASS: TestFunctional/serial/SoftStart (24.74s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-728577 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.66s)

TestFunctional/serial/CacheCmd/cache/add_local (0.74s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-728577 /tmp/TestFunctionalserialCacheCmdcacheadd_local2155780109/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache add minikube-local-cache-test:functional-728577
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache delete minikube-local-cache-test:functional-728577
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-728577
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.74s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.26s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (260.690717ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.58s)
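To make the cache_reload pass/fail logic above easier to follow, here is the same cycle collapsed into its command sequence, all taken from the log: the intermediate inspecti failure with exit status 1 is the expected state after removing the image, and the final inspecti succeeding is what the test asserts.

	out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl rmi registry.k8s.io/pause:latest        # remove the cached image from the node
	out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: "no such image ... present"
	out/minikube-linux-amd64 -p functional-728577 cache reload                                            # push minikube's cached images back into the node
	out/minikube-linux-amd64 -p functional-728577 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again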

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 kubectl -- --context functional-728577 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-728577 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (33.31s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0912 21:50:29.801495   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:29.807099   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:29.817321   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:29.837585   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:29.877923   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:29.958258   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:30.118693   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:30.439330   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:31.080228   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:32.360706   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:34.921826   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:40.042190   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 21:50:50.282378   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-728577 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.314389526s)
functional_test.go:757: restart took 33.314516792s for "functional-728577" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.31s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-728577 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.28s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 logs: (1.277321308s)
--- PASS: TestFunctional/serial/LogsCmd (1.28s)

TestFunctional/serial/LogsFileCmd (1.29s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 logs --file /tmp/TestFunctionalserialLogsFileCmd4289231088/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 logs --file /tmp/TestFunctionalserialLogsFileCmd4289231088/001/logs.txt: (1.286070178s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (3.89s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-728577 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-728577
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-728577: exit status 115 (309.836758ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30983 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-728577 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.89s)

TestFunctional/parallel/ConfigCmd (0.39s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 config get cpus: exit status 14 (115.031259ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 config get cpus: exit status 14 (54.615983ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.39s)
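The exit status 14 entries above are the expected behavior of "config get" on an unset key. A condensed view of the lifecycle the test walks through is sketched below; the commands are from the log, while the expected outputs in the comments are assumptions based on that exit-status pattern.

	out/minikube-linux-amd64 -p functional-728577 config get cpus     # exit status 14: key not found in config
	out/minikube-linux-amd64 -p functional-728577 config set cpus 2
	out/minikube-linux-amd64 -p functional-728577 config get cpus     # now expected to print the stored value
	out/minikube-linux-amd64 -p functional-728577 config unset cpus
	out/minikube-linux-amd64 -p functional-728577 config get cpus     # exit status 14 again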

TestFunctional/parallel/DashboardCmd (7.31s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728577 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-728577 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 57938: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.31s)

TestFunctional/parallel/DryRun (0.33s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728577 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (137.900574ms)

                                                
                                                
-- stdout --
	* [functional-728577] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 21:51:25.578122   56555 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:51:25.578239   56555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:25.578249   56555 out.go:309] Setting ErrFile to fd 2...
	I0912 21:51:25.578256   56555 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:25.578457   56555 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 21:51:25.578995   56555 out.go:303] Setting JSON to false
	I0912 21:51:25.579982   56555 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5634,"bootTime":1694549852,"procs":307,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:51:25.580045   56555 start.go:138] virtualization: kvm guest
	I0912 21:51:25.582263   56555 out.go:177] * [functional-728577] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 21:51:25.584120   56555 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 21:51:25.585350   56555 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:51:25.584171   56555 notify.go:220] Checking for updates...
	I0912 21:51:25.587770   56555 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:51:25.589061   56555 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:51:25.590331   56555 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:51:25.591602   56555 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:51:25.593186   56555 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 21:51:25.593618   56555 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 21:51:25.616379   56555 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 21:51:25.616482   56555 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:51:25.667596   56555 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-09-12 21:51:25.658450411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:51:25.668827   56555 docker.go:294] overlay module found
	I0912 21:51:25.670658   56555 out.go:177] * Using the docker driver based on existing profile
	I0912 21:51:25.671955   56555 start.go:298] selected driver: docker
	I0912 21:51:25.671969   56555 start.go:902] validating driver "docker" against &{Name:functional-728577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-728577 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:51:25.672105   56555 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:51:25.674148   56555 out.go:177] 
	W0912 21:51:25.675430   56555 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0912 21:51:25.676702   56555 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.33s)

TestFunctional/parallel/InternationalLanguage (0.17s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-728577 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-728577 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (174.392334ms)

                                                
                                                
-- stdout --
	* [functional-728577] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 21:51:26.816921   57142 out.go:296] Setting OutFile to fd 1 ...
	I0912 21:51:26.817030   57142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:26.817040   57142 out.go:309] Setting ErrFile to fd 2...
	I0912 21:51:26.817044   57142 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 21:51:26.817317   57142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 21:51:26.817848   57142 out.go:303] Setting JSON to false
	I0912 21:51:26.825631   57142 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5635,"bootTime":1694549852,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 21:51:26.825724   57142 start.go:138] virtualization: kvm guest
	I0912 21:51:26.828255   57142 out.go:177] * [functional-728577] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0912 21:51:26.829825   57142 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 21:51:26.829880   57142 notify.go:220] Checking for updates...
	I0912 21:51:26.831265   57142 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 21:51:26.832573   57142 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 21:51:26.833979   57142 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 21:51:26.835335   57142 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 21:51:26.836551   57142 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 21:51:26.838433   57142 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 21:51:26.839056   57142 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 21:51:26.874506   57142 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 21:51:26.874604   57142 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 21:51:26.936648   57142 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:47 SystemTime:2023-09-12 21:51:26.928444776 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Archi
tecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil>
ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 21:51:26.936782   57142 docker.go:294] overlay module found
	I0912 21:51:26.939867   57142 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0912 21:51:26.941301   57142 start.go:298] selected driver: docker
	I0912 21:51:26.941316   57142 start.go:902] validating driver "docker" against &{Name:functional-728577 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1694457807-17194@sha256:a43492789075efb9a6b2ea51ab0c60354400324130ed0bb27d969c2fba2f2402 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:functional-728577 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0912 21:51:26.941395   57142 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 21:51:26.943569   57142 out.go:177] 
	W0912 21:51:26.944984   57142 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0912 21:51:26.946444   57142 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.91s)
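
Note on the second status call above: the -f argument is a Go text/template string that minikube renders against its status object. A minimal sketch of how such a template expands; the Status struct here is a stand-in with only the four fields the format string names (not minikube's actual type), and the format string is copied verbatim from the log, including its "kublet" spelling:

// sketch: rendering a status format template like the one passed to `minikube status -f`
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for illustration; field names are taken from the template above.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	s := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	_ = tmpl.Execute(os.Stdout, s) // prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}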

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-728577 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-728577 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-bccsd" [6bb434b2-bac2-408e-95ee-df5147ef28af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-bccsd" [6bb434b2-bac2-408e-95ee-df5147ef28af] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.014453299s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.49.2:30226
functional_test.go:1674: http://192.168.49.2:30226: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-bccsd

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30226
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.74s)
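
For reference, the final step of the flow above is just an HTTP GET against the NodePort URL that `minikube service hello-node-connect --url` reported. A minimal sketch of that check; the URL is the one printed in this run (in practice it would be read from the command's stdout), and the retry count and delay are arbitrary:

// sketch: poll the NodePort endpoint until the echoserver answers
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:30226" // endpoint reported by the log above
	var body []byte
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ = io.ReadAll(resp.Body)
			resp.Body.Close()
			break
		}
		time.Sleep(2 * time.Second)
	}
	// The echoserver body echoes the hostname, request headers and request info,
	// as shown in the captured response above.
	fmt.Printf("%s", body)
}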

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [733cdb88-d27a-4746-9b4d-290dc9bdf8a8] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.013510153s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-728577 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-728577 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-728577 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728577 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8471f851-8849-4db8-9662-dcd3a3ec28e8] Pending
helpers_test.go:344: "sp-pod" [8471f851-8849-4db8-9662-dcd3a3ec28e8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8471f851-8849-4db8-9662-dcd3a3ec28e8] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.008693847s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-728577 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-728577 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-728577 delete -f testdata/storage-provisioner/pod.yaml: (1.120800341s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-728577 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e454c931-348e-4653-a444-c19006abee79] Pending
helpers_test.go:344: "sp-pod" [e454c931-348e-4653-a444-c19006abee79] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.010180939s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-728577 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.12s)
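
The sequence above is a persistence check: write a marker file through the first pod, delete and recreate the pod, then confirm the file is still on the claim. A minimal sketch of the same sequence by shelling out to kubectl; the context name, pod name and manifest path are the ones used in this run, error handling is trimmed, and the readiness wait between re-apply and the final check is only noted in a comment:

// sketch: verify data on a PVC survives pod recreation
package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Println("kubectl", args, "failed:", err)
	}
	return string(out)
}

func main() {
	ctx := "--context=functional-728577"
	run(ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run(ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run(ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// The real test waits for the new sp-pod to become Ready here before checking.
	fmt.Print(run(ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount"))
}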

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.66s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh -n functional-728577 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 cp functional-728577:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2813034915/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh -n functional-728577 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (22.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-728577 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-gpzrg" [3e654ef4-1027-4fec-9197-daf01c5f858c] Pending
helpers_test.go:344: "mysql-859648c796-gpzrg" [3e654ef4-1027-4fec-9197-daf01c5f858c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-gpzrg" [3e654ef4-1027-4fec-9197-daf01c5f858c] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.010454964s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-728577 exec mysql-859648c796-gpzrg -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-728577 exec mysql-859648c796-gpzrg -- mysql -ppassword -e "show databases;": exit status 1 (134.772505ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-728577 exec mysql-859648c796-gpzrg -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (22.56s)
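
The first exec above fails with ERROR 2002 because a Running mysql pod can still be initializing its socket; the test simply retries until the query succeeds. A minimal sketch of that retry loop; the context, pod name and query are taken from the log, while the attempt count and delay are arbitrary:

// sketch: retry a mysql query through kubectl exec until the server accepts connections
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context=functional-728577", "exec", "mysql-859648c796-gpzrg", "--",
		"mysql", "-ppassword", "-e", "show databases;"}
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying...\n", attempt, err)
		time.Sleep(3 * time.Second)
	}
}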

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/22698/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /etc/test/nested/copy/22698/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/22698.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /etc/ssl/certs/22698.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/22698.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /usr/share/ca-certificates/22698.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/226982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /etc/ssl/certs/226982.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/226982.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /usr/share/ca-certificates/226982.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.61s)
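
The hash-named paths checked above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL trust-store convention: the filename is the certificate's subject hash plus a ".0" suffix, which is how TLS libraries look certificates up in a hashed cert directory. A minimal sketch of deriving that name, assuming the openssl CLI is available; the input path reuses the synced .pem from this run purely as an example:

// sketch: compute the trust-store filename for a synced certificate
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash",
		"-in", "/etc/ssl/certs/22698.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("expected trust-store name: /etc/ssl/certs/%s.0\n", hash)
}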

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-728577 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
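
The go-template passed to kubectl above walks the label map of the first node in the `get nodes` payload and prints each key. A minimal sketch of the same template evaluated in Go; the JSON here is a trimmed, hand-written stand-in for the real node list, with label values invented for illustration:

// sketch: how the node-labels go-template iterates a label map
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	const payload = `{"items":[{"metadata":{"labels":{
		"kubernetes.io/hostname":"functional-728577",
		"kubernetes.io/os":"linux"}}}]}`
	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(payload), &doc); err != nil {
		panic(err)
	}
	tmpl := template.Must(template.New("labels").Parse(
		"{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}"))
	_ = tmpl.Execute(os.Stdout, doc) // prints each label key of the first node
}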

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "sudo systemctl is-active docker": exit status 1 (317.104406ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "sudo systemctl is-active containerd": exit status 1 (272.870156ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
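
The non-zero exits above are expected: `systemctl is-active <unit>` prints "inactive" and exits non-zero (3 for a unit that is not running), and on a crio-based node that is exactly what docker and containerd should report. A minimal sketch of interpreting that result; the unit names are the ones the test probes:

// sketch: treat a non-zero `systemctl is-active` exit with "inactive" output as a disabled runtime
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func isActive(unit string) bool {
	// Output() still returns captured stdout when the command exits non-zero.
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	if err != nil {
		fmt.Printf("%s: %s (exit error: %v)\n", unit, state, err)
		return false
	}
	return state == "active"
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Println(unit, "active:", isActive(unit))
	}
}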

                                                
                                    
x
+
TestFunctional/parallel/License (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52058: os: process already finished
helpers_test.go:502: unable to terminate pid 51706: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-728577 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [cc4e9677-4321-4397-9a14-1ab921208ef1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [cc4e9677-4321-4397-9a14-1ab921208ef1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.029086493s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728577 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-728577
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20230809-80a64d96
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728577 image ls --format short --alsologtostderr:
I0912 21:51:30.685496   58750 out.go:296] Setting OutFile to fd 1 ...
I0912 21:51:30.685948   58750 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:30.686001   58750 out.go:309] Setting ErrFile to fd 2...
I0912 21:51:30.686019   58750 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:30.686477   58750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
I0912 21:51:30.687820   58750 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:30.687995   58750 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:30.688761   58750 cli_runner.go:164] Run: docker container inspect functional-728577 --format={{.State.Status}}
I0912 21:51:30.706199   58750 ssh_runner.go:195] Run: systemctl --version
I0912 21:51:30.706239   58750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728577
I0912 21:51:30.722862   58750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/functional-728577/id_rsa Username:docker}
I0912 21:51:30.824876   58750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728577 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/nginx                 | latest             | f5a6b296b8a29 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-728577  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| docker.io/library/nginx                 | alpine             | 433dbc17191a7 | 44.4MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728577 image ls --format table --alsologtostderr:
I0912 21:51:32.685571   59147 out.go:296] Setting OutFile to fd 1 ...
I0912 21:51:32.685874   59147 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:32.685888   59147 out.go:309] Setting ErrFile to fd 2...
I0912 21:51:32.685895   59147 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:32.686197   59147 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
I0912 21:51:32.686833   59147 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:32.686976   59147 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:32.687532   59147 cli_runner.go:164] Run: docker container inspect functional-728577 --format={{.State.Status}}
I0912 21:51:32.704979   59147 ssh_runner.go:195] Run: systemctl --version
I0912 21:51:32.705038   59147 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728577
I0912 21:51:32.721091   59147 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/functional-728577/id_rsa Username:docker}
I0912 21:51:32.824350   59147 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728577 image ls --format json --alsologtostderr:
[{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126972880"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184e
c9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629","repoDigests":["docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70","docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44389673"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[
"gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820093"},{"id":"ffd4cfbbe753e6241
9e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-728577"],"size":"34114467"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb
0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha25
6:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha2
56:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728577 image ls --format json --alsologtostderr:
I0912 21:51:32.478764   59067 out.go:296] Setting OutFile to fd 1 ...
I0912 21:51:32.478867   59067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:32.478877   59067 out.go:309] Setting ErrFile to fd 2...
I0912 21:51:32.478882   59067 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:32.479067   59067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
I0912 21:51:32.479672   59067 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:32.479783   59067 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:32.480236   59067 cli_runner.go:164] Run: docker container inspect functional-728577 --format={{.State.Status}}
I0912 21:51:32.498445   59067 ssh_runner.go:195] Run: systemctl --version
I0912 21:51:32.498503   59067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728577
I0912 21:51:32.515953   59067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/functional-728577/id_rsa Username:docker}
I0912 21:51:32.608755   59067 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
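
As the stdout above shows, `image ls --format json` emits a flat array of image records with id, repoDigests, repoTags and a size in bytes encoded as a string. A minimal sketch of consuming that payload; the field names are read off the output above, and the pipe usage at the end (with a hypothetical list.go filename) is only illustrative:

// sketch: decode the JSON image list and print tagged images with their sizes
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string in the payload
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%-60s %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}

Usage would be along the lines of: minikube -p functional-728577 image ls --format json | go run list.go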

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728577 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-728577
size: "34114467"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a
repoTags:
- docker.io/library/nginx:latest
size: "190820093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629
repoDigests:
- docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70
- docker.io/library/nginx@sha256:7ba6006df2033690d8c64bd8df69e4a1957b78e57b4e32141c78d72a5e0de63d
repoTags:
- docker.io/library/nginx:alpine
size: "44389673"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728577 image ls --format yaml --alsologtostderr:
I0912 21:51:30.903760   58805 out.go:296] Setting OutFile to fd 1 ...
I0912 21:51:30.903896   58805 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:30.903909   58805 out.go:309] Setting ErrFile to fd 2...
I0912 21:51:30.903918   58805 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:30.904125   58805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
I0912 21:51:30.904789   58805 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:30.904892   58805 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:30.905276   58805 cli_runner.go:164] Run: docker container inspect functional-728577 --format={{.State.Status}}
I0912 21:51:30.921633   58805 ssh_runner.go:195] Run: systemctl --version
I0912 21:51:30.921684   58805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728577
I0912 21:51:30.939899   58805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/functional-728577/id_rsa Username:docker}
I0912 21:51:31.032921   58805 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh pgrep buildkitd: exit status 1 (266.069553ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image build -t localhost/my-image:functional-728577 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 image build -t localhost/my-image:functional-728577 testdata/build --alsologtostderr: (2.715565854s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-728577 image build -t localhost/my-image:functional-728577 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 060767da3f5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-728577
--> ec847d4e8c6
Successfully tagged localhost/my-image:functional-728577
ec847d4e8c66f48796d98a8dd9dd28ee1d4eb950b5767e29e04debf6bfea2f4b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-728577 image build -t localhost/my-image:functional-728577 testdata/build --alsologtostderr:
I0912 21:51:31.378292   58939 out.go:296] Setting OutFile to fd 1 ...
I0912 21:51:31.378596   58939 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:31.378607   58939 out.go:309] Setting ErrFile to fd 2...
I0912 21:51:31.378614   58939 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0912 21:51:31.378909   58939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
I0912 21:51:31.379724   58939 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:31.380286   58939 config.go:182] Loaded profile config "functional-728577": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0912 21:51:31.380700   58939 cli_runner.go:164] Run: docker container inspect functional-728577 --format={{.State.Status}}
I0912 21:51:31.398170   58939 ssh_runner.go:195] Run: systemctl --version
I0912 21:51:31.398244   58939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-728577
I0912 21:51:31.413467   58939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/functional-728577/id_rsa Username:docker}
I0912 21:51:31.524929   58939 build_images.go:151] Building image from path: /tmp/build.1339998725.tar
I0912 21:51:31.524987   58939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0912 21:51:31.534686   58939 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1339998725.tar
I0912 21:51:31.538118   58939 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1339998725.tar: stat -c "%s %y" /var/lib/minikube/build/build.1339998725.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1339998725.tar': No such file or directory
I0912 21:51:31.538147   58939 ssh_runner.go:362] scp /tmp/build.1339998725.tar --> /var/lib/minikube/build/build.1339998725.tar (3072 bytes)
I0912 21:51:31.563733   58939 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1339998725
I0912 21:51:31.626830   58939 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1339998725 -xf /var/lib/minikube/build/build.1339998725.tar
I0912 21:51:31.635573   58939 crio.go:297] Building image: /var/lib/minikube/build/build.1339998725
I0912 21:51:31.635648   58939 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-728577 /var/lib/minikube/build/build.1339998725 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0912 21:51:34.027867   58939 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-728577 /var/lib/minikube/build/build.1339998725 --cgroup-manager=cgroupfs: (2.392190676s)
I0912 21:51:34.027937   58939 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1339998725
I0912 21:51:34.035839   58939 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1339998725.tar
I0912 21:51:34.044289   58939 build_images.go:207] Built localhost/my-image:functional-728577 from /tmp/build.1339998725.tar
I0912 21:51:34.044318   58939 build_images.go:123] succeeded building to: functional-728577
I0912 21:51:34.044324   58939 build_images.go:124] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-728577
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.99s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdany-port3720286772/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694555467242369794" to /tmp/TestFunctionalparallelMountCmdany-port3720286772/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694555467242369794" to /tmp/TestFunctionalparallelMountCmdany-port3720286772/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694555467242369794" to /tmp/TestFunctionalparallelMountCmdany-port3720286772/001/test-1694555467242369794
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.852543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 12 21:51 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 12 21:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 12 21:51 test-1694555467242369794
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh cat /mount-9p/test-1694555467242369794
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-728577 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [10272866-f5ca-4df4-b6bb-80112f435441] Pending
helpers_test.go:344: "busybox-mount" [10272866-f5ca-4df4-b6bb-80112f435441] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0912 21:51:10.762590   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [10272866-f5ca-4df4-b6bb-80112f435441] Running
helpers_test.go:344: "busybox-mount" [10272866-f5ca-4df4-b6bb-80112f435441] Running: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [10272866-f5ca-4df4-b6bb-80112f435441] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.01029283s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-728577 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdany-port3720286772/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)
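The 9p mount flow above can be reproduced manually; a minimal sketch, with /tmp/somedir as a placeholder host directory (the mount command stays in the foreground, so the checks run from a second shell):
	out/minikube-linux-amd64 mount -p functional-728577 /tmp/somedir:/mount-9p
	out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-728577 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-728577 ssh "sudo umount -f /mount-9p"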

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr: (3.871092558s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.07s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr: (2.729750019s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-728577 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.049683539s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-728577
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr: (5.431822472s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)
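The three daemon-load tests above share the same pull, tag, and load sequence; a minimal sketch using the commands from the log:
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-728577
	out/minikube-linux-amd64 -p functional-728577 image load --daemon gcr.io/google-containers/addon-resizer:functional-728577
	out/minikube-linux-amd64 -p functional-728577 image ls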

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.42.51 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-728577 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdspecific-port2599362969/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.243405ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdspecific-port2599362969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "sudo umount -f /mount-9p": exit status 1 (317.978836ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-728577 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdspecific-port2599362969/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T" /mount1: exit status 1 (388.651522ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-728577 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-728577 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3495733138/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.99s)
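The cleanup path checked here is also available as a standalone command; any lingering mount processes for the profile can be terminated with:
	out/minikube-linux-amd64 mount -p functional-728577 --kill=true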

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-728577 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-728577 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rp8ww" [116fa365-ba10-4051-92ac-f92047d7088c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rp8ww" [116fa365-ba10-4051-92ac-f92047d7088c] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.009991458s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.17s)
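A minimal sketch of the deployment steps above; the kubectl wait line is an assumed manual equivalent of the test's own pod polling:
	kubectl --context functional-728577 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-728577 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-728577 wait --for=condition=ready pod -l app=hello-node --timeout=10m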

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image save gcr.io/google-containers/addon-resizer:functional-728577 /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image rm gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image load /home/jenkins/workspace/Docker_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-728577
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 image save --daemon gcr.io/google-containers/addon-resizer:functional-728577 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-728577
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.77s)
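The four image save/remove/load tests above form one round trip; a minimal sketch, with ./addon-resizer-save.tar as a placeholder path:
	out/minikube-linux-amd64 -p functional-728577 image save gcr.io/google-containers/addon-resizer:functional-728577 ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-728577 image rm gcr.io/google-containers/addon-resizer:functional-728577
	out/minikube-linux-amd64 -p functional-728577 image load ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-728577 image save --daemon gcr.io/google-containers/addon-resizer:functional-728577
	docker image inspect gcr.io/google-containers/addon-resizer:functional-728577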

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "306.292293ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "42.670945ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "261.795607ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "39.448869ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service list -o json
functional_test.go:1493: Took "475.503128ms" to run "out/minikube-linux-amd64 -p functional-728577 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.49.2:30225
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 update-context --alsologtostderr -v=2
2023/09/12 21:51:34 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-728577 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.49.2:30225
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
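A minimal sketch of the service-discovery commands the ServiceCmd tests above run against the hello-node service:
	out/minikube-linux-amd64 -p functional-728577 service list
	out/minikube-linux-amd64 -p functional-728577 service --namespace=default --https --url hello-node
	out/minikube-linux-amd64 -p functional-728577 service hello-node --url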

                                                
                                    
x
+
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-728577
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-728577
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-728577
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestIngressAddonLegacy/StartLegacyK8sCluster (67.89s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-704515 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-704515 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (1m7.89441729s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (67.89s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.75s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons enable ingress --alsologtostderr -v=5
E0912 21:53:13.643464   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons enable ingress --alsologtostderr -v=5: (10.745678092s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.75s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-704515 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

                                                
                                    
x
+
TestJSONOutput/start/Command (40.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-623455 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E0912 21:56:26.964284   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:56:47.445480   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-623455 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.099733627s)
--- PASS: TestJSONOutput/start/Command (40.10s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-623455 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.57s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-623455 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.8s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-623455 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-623455 --output=json --user=testUser: (5.800389244s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.18s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-403315 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-403315 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (54.798188ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d7f2c8be-af24-48c4-b15d-f25bf599d4c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-403315] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c321cd4-1441-4af9-a956-610fbfc8310b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17194"}}
	{"specversion":"1.0","id":"43d4a3e8-3f43-43bb-a57f-9a8f2211e6d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b4b6b718-555c-4696-9caa-0da3736cab30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig"}}
	{"specversion":"1.0","id":"8b5faa9f-9974-4668-a3aa-a79c1fe97550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube"}}
	{"specversion":"1.0","id":"1eb377f8-b7aa-4276-9e28-d95bb60d947c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2fa8d7f9-10a9-4e0a-8138-7fb032ee944a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1d8a5539-0606-4ea3-8457-4a3f30c1afc7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-403315" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-403315
--- PASS: TestErrorJSONOutput (0.18s)
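Each line emitted under --output=json is a CloudEvents-style record like the ones in the stdout block above; a minimal sketch for filtering them, assuming jq is available on the host (the test itself cleans the profile up with delete afterwards):
	out/minikube-linux-amd64 start -p json-output-error-403315 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
	out/minikube-linux-amd64 delete -p json-output-error-403315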

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (31.62s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-598263 --network=
E0912 21:57:28.406909   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-598263 --network=: (29.587544981s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-598263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-598263
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-598263: (2.015763652s)
--- PASS: TestKicCustomNetwork/create_custom_network (31.62s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (26.39s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-503843 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-503843 --network=bridge: (24.514407537s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-503843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-503843
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-503843: (1.854747893s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.39s)

                                                
                                    
x
+
TestKicExistingNetwork (26.97s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-791659 --network=existing-network
E0912 21:58:16.200279   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.205594   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.215870   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.236186   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.276511   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.356880   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.517405   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:16.837790   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:17.478801   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:18.759503   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:21.320754   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 21:58:26.441145   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-791659 --network=existing-network: (24.956767488s)
helpers_test.go:175: Cleaning up "existing-network-791659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-791659
E0912 21:58:36.681637   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-791659: (1.885814254s)
--- PASS: TestKicExistingNetwork (26.97s)

                                                
                                    
x
+
TestKicCustomSubnet (24.17s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-463813 --subnet=192.168.60.0/24
E0912 21:58:50.328084   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 21:58:57.162301   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-463813 --subnet=192.168.60.0/24: (22.121303107s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-463813 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-463813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-463813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-463813: (2.0265777s)
--- PASS: TestKicCustomSubnet (24.17s)
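The custom network and custom subnet tests above can be reproduced with the same flags; a minimal sketch (profile and subnet values reused from the log), with the result verified via docker network inspect:
	out/minikube-linux-amd64 start -p custom-subnet-463813 --subnet=192.168.60.0/24
	docker network inspect custom-subnet-463813 --format "{{(index .IPAM.Config 0).Subnet}}"
	out/minikube-linux-amd64 delete -p custom-subnet-463813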

                                                
                                    
x
+
TestKicStaticIP (24.9s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-775046 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-775046 --static-ip=192.168.200.200: (22.755276557s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-775046 ip
helpers_test.go:175: Cleaning up "static-ip-775046" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-775046
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-775046: (2.030816572s)
--- PASS: TestKicStaticIP (24.90s)
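Similarly, the static-IP case above boils down to the following sketch, reusing the profile and address from the log:
	out/minikube-linux-amd64 start -p static-ip-775046 --static-ip=192.168.200.200
	out/minikube-linux-amd64 -p static-ip-775046 ip
	out/minikube-linux-amd64 delete -p static-ip-775046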

                                                
                                    
x
+
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
x
+
TestMinikubeProfile (49.1s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-032844 --driver=docker  --container-runtime=crio
E0912 21:59:38.123117   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-032844 --driver=docker  --container-runtime=crio: (22.028408687s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-035029 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-035029 --driver=docker  --container-runtime=crio: (22.151459163s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-032844
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-035029
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-035029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-035029
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-035029: (1.80817097s)
helpers_test.go:175: Cleaning up "first-032844" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-032844
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-032844: (2.153736302s)
--- PASS: TestMinikubeProfile (49.10s)
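A minimal sketch of the two-profile flow above, switching the active profile and checking the listing:
	out/minikube-linux-amd64 start -p first-032844 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 start -p second-035029 --driver=docker --container-runtime=crio
	out/minikube-linux-amd64 profile first-032844
	out/minikube-linux-amd64 profile list -ojson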

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (7.98s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-883342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-883342 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.981795911s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.98s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-883342 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.01s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-899248 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0912 22:00:29.801637   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-899248 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.012757328s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.01s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899248 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-883342 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-883342 --alsologtostderr -v=5: (1.586973544s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899248 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-899248
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-899248: (1.174610394s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (6.99s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-899248
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-899248: (5.987824404s)
--- PASS: TestMountStart/serial/RestartStopped (6.99s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-899248 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.23s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (67.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-947523 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0912 22:01:00.044230   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 22:01:06.482427   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 22:01:34.169292   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-947523 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.554899777s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (67.99s)
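A two-node crio cluster like the one started above is created with a single start invocation; a rough sketch with the flags used in this run:

# create a two-node cluster on the docker driver with the crio runtime
out/minikube-linux-amd64 start -p multinode-947523 --wait=true --memory=2200 --nodes=2 \
  --driver=docker --container-runtime=crio
# both the control plane and the worker should report host/kubelet Running
out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr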

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (3.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-947523 -- rollout status deployment/busybox: (1.982409768s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-2lnnj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec busybox-5bc68d56bd-4qwb4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.48s)
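The DNS checks above boil down to deploying the two-replica busybox workload and running nslookup inside each pod; a rough sketch, where <busybox-pod> is a placeholder for a name returned by the get pods call:

# deploy the test workload from the repository testdata and wait for the rollout
out/minikube-linux-amd64 kubectl -p multinode-947523 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
out/minikube-linux-amd64 kubectl -p multinode-947523 -- rollout status deployment/busybox
# list the pod names, then resolve cluster DNS from inside one of them
out/minikube-linux-amd64 kubectl -p multinode-947523 -- get pods -o jsonpath='{.items[*].metadata.name}'
out/minikube-linux-amd64 kubectl -p multinode-947523 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local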

                                                
                                    
x
+
TestMultiNode/serial/AddNode (19.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-947523 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-947523 -v 3 --alsologtostderr: (19.419025316s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.99s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (8.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp testdata/cp-test.txt multinode-947523:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096745224/001/cp-test_multinode-947523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523:/home/docker/cp-test.txt multinode-947523-m02:/home/docker/cp-test_multinode-947523_multinode-947523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test_multinode-947523_multinode-947523-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523:/home/docker/cp-test.txt multinode-947523-m03:/home/docker/cp-test_multinode-947523_multinode-947523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test_multinode-947523_multinode-947523-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp testdata/cp-test.txt multinode-947523-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096745224/001/cp-test_multinode-947523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m02:/home/docker/cp-test.txt multinode-947523:/home/docker/cp-test_multinode-947523-m02_multinode-947523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test_multinode-947523-m02_multinode-947523.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m02:/home/docker/cp-test.txt multinode-947523-m03:/home/docker/cp-test_multinode-947523-m02_multinode-947523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test_multinode-947523-m02_multinode-947523-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp testdata/cp-test.txt multinode-947523-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2096745224/001/cp-test_multinode-947523-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m03:/home/docker/cp-test.txt multinode-947523:/home/docker/cp-test_multinode-947523-m03_multinode-947523.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523 "sudo cat /home/docker/cp-test_multinode-947523-m03_multinode-947523.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523-m03:/home/docker/cp-test.txt multinode-947523-m02:/home/docker/cp-test_multinode-947523-m03_multinode-947523-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test_multinode-947523-m03_multinode-947523-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.72s)
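The copy matrix above is built from two primitives, minikube cp and minikube ssh; a rough sketch of one hop (local file to the control plane, then on to the m02 worker):

# copy a local file onto the control-plane node
out/minikube-linux-amd64 -p multinode-947523 cp testdata/cp-test.txt multinode-947523:/home/docker/cp-test.txt
# copy it node-to-node and read it back on the destination
out/minikube-linux-amd64 -p multinode-947523 cp multinode-947523:/home/docker/cp-test.txt multinode-947523-m02:/home/docker/cp-test_multinode-947523_multinode-947523-m02.txt
out/minikube-linux-amd64 -p multinode-947523 ssh -n multinode-947523-m02 "sudo cat /home/docker/cp-test_multinode-947523_multinode-947523-m02.txt"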

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-947523 node stop m03: (1.191422345s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-947523 status: exit status 7 (439.106342ms)

                                                
                                                
-- stdout --
	multinode-947523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-947523-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-947523-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr: exit status 7 (443.376771ms)

                                                
                                                
-- stdout --
	multinode-947523
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-947523-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-947523-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:02:30.050446  119342 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:02:30.050545  119342 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:30.050554  119342 out.go:309] Setting ErrFile to fd 2...
	I0912 22:02:30.050558  119342 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:02:30.050759  119342 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:02:30.050916  119342 out.go:303] Setting JSON to false
	I0912 22:02:30.050949  119342 mustload.go:65] Loading cluster: multinode-947523
	I0912 22:02:30.051064  119342 notify.go:220] Checking for updates...
	I0912 22:02:30.051460  119342 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:02:30.051479  119342 status.go:255] checking status of multinode-947523 ...
	I0912 22:02:30.051982  119342 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:02:30.070138  119342 status.go:330] multinode-947523 host status = "Running" (err=<nil>)
	I0912 22:02:30.070165  119342 host.go:66] Checking if "multinode-947523" exists ...
	I0912 22:02:30.070403  119342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523
	I0912 22:02:30.086002  119342 host.go:66] Checking if "multinode-947523" exists ...
	I0912 22:02:30.086254  119342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:30.086309  119342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523
	I0912 22:02:30.102086  119342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32847 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523/id_rsa Username:docker}
	I0912 22:02:30.193321  119342 ssh_runner.go:195] Run: systemctl --version
	I0912 22:02:30.197127  119342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:30.206788  119342 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:02:30.256581  119342 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:56 SystemTime:2023-09-12 22:02:30.248242038 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:02:30.257145  119342 kubeconfig.go:92] found "multinode-947523" server: "https://192.168.58.2:8443"
	I0912 22:02:30.257167  119342 api_server.go:166] Checking apiserver status ...
	I0912 22:02:30.257208  119342 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0912 22:02:30.267022  119342 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1427/cgroup
	I0912 22:02:30.275224  119342 api_server.go:182] apiserver freezer: "12:freezer:/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio/crio-736dccdd44581f1e0181e5f68695d2a1c2e64713c3a67a5e57f1f67405629d32"
	I0912 22:02:30.275281  119342 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1fdd02d2728f05042ad7b89b5c209062d58952ad9d268f4afc4e90603855d281/crio/crio-736dccdd44581f1e0181e5f68695d2a1c2e64713c3a67a5e57f1f67405629d32/freezer.state
	I0912 22:02:30.282756  119342 api_server.go:204] freezer state: "THAWED"
	I0912 22:02:30.282784  119342 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0912 22:02:30.286725  119342 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0912 22:02:30.286744  119342 status.go:421] multinode-947523 apiserver status = Running (err=<nil>)
	I0912 22:02:30.286751  119342 status.go:257] multinode-947523 status: &{Name:multinode-947523 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:30.286768  119342 status.go:255] checking status of multinode-947523-m02 ...
	I0912 22:02:30.286999  119342 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Status}}
	I0912 22:02:30.303264  119342 status.go:330] multinode-947523-m02 host status = "Running" (err=<nil>)
	I0912 22:02:30.303288  119342 host.go:66] Checking if "multinode-947523-m02" exists ...
	I0912 22:02:30.303554  119342 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-947523-m02
	I0912 22:02:30.319082  119342 host.go:66] Checking if "multinode-947523-m02" exists ...
	I0912 22:02:30.319334  119342 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0912 22:02:30.319378  119342 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-947523-m02
	I0912 22:02:30.336330  119342 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32852 SSHKeyPath:/home/jenkins/minikube-integration/17194-15878/.minikube/machines/multinode-947523-m02/id_rsa Username:docker}
	I0912 22:02:30.429144  119342 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0912 22:02:30.438918  119342 status.go:257] multinode-947523-m02 status: &{Name:multinode-947523-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:02:30.438952  119342 status.go:255] checking status of multinode-947523-m03 ...
	I0912 22:02:30.439236  119342 cli_runner.go:164] Run: docker container inspect multinode-947523-m03 --format={{.State.Status}}
	I0912 22:02:30.455365  119342 status.go:330] multinode-947523-m03 host status = "Stopped" (err=<nil>)
	I0912 22:02:30.455391  119342 status.go:343] host is not running, skipping remaining checks
	I0912 22:02:30.455399  119342 status.go:257] multinode-947523-m03 status: &{Name:multinode-947523-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.07s)
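The exit status 7 above is expected: once a node host is stopped, minikube status reports it and exits non-zero. A rough sketch of the same check:

# stop the third node, then query status; the stopped host makes status exit 7
out/minikube-linux-amd64 -p multinode-947523 node stop m03
out/minikube-linux-amd64 -p multinode-947523 status || echo "status exited $? (expected 7 while a host is stopped)"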

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (11.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:244: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-947523 node start m03 --alsologtostderr: (10.36041842s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.02s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (114.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-947523
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-947523
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-947523: (24.740296472s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-947523 --wait=true -v=8 --alsologtostderr
E0912 22:03:16.199708   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 22:03:43.884954   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-947523 --wait=true -v=8 --alsologtostderr: (1m29.679302925s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-947523
--- PASS: TestMultiNode/serial/RestartKeepsNodes (114.50s)
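Restart-keeps-nodes can be verified by comparing the node list before and after a full stop/start cycle; a rough sketch with the flags from this run:

out/minikube-linux-amd64 node list -p multinode-947523     # record the node list
out/minikube-linux-amd64 stop -p multinode-947523          # stop every node in the profile
out/minikube-linux-amd64 start -p multinode-947523 --wait=true -v=8 --alsologtostderr
out/minikube-linux-amd64 node list -p multinode-947523     # should match the list recorded above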

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (4.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-947523 node delete m03: (4.051847022s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
multinode_test.go:414: (dbg) Run:  docker volume ls
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.62s)
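Deleting a node and confirming it is gone from both minikube and the Kubernetes API looks like this (rough sketch):

# drop the third node, then check minikube's view and the apiserver's view
out/minikube-linux-amd64 -p multinode-947523 node delete m03
out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
kubectl get nodes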

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-947523 stop: (23.633901807s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-947523 status: exit status 7 (76.608015ms)

                                                
                                                
-- stdout --
	multinode-947523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-947523-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr: exit status 7 (76.056303ms)

                                                
                                                
-- stdout --
	multinode-947523
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-947523-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:05:04.343297  129434 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:05:04.343399  129434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:04.343404  129434 out.go:309] Setting ErrFile to fd 2...
	I0912 22:05:04.343409  129434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:05:04.343606  129434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:05:04.343771  129434 out.go:303] Setting JSON to false
	I0912 22:05:04.343802  129434 mustload.go:65] Loading cluster: multinode-947523
	I0912 22:05:04.343874  129434 notify.go:220] Checking for updates...
	I0912 22:05:04.344138  129434 config.go:182] Loaded profile config "multinode-947523": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:05:04.344151  129434 status.go:255] checking status of multinode-947523 ...
	I0912 22:05:04.344535  129434 cli_runner.go:164] Run: docker container inspect multinode-947523 --format={{.State.Status}}
	I0912 22:05:04.363809  129434 status.go:330] multinode-947523 host status = "Stopped" (err=<nil>)
	I0912 22:05:04.363832  129434 status.go:343] host is not running, skipping remaining checks
	I0912 22:05:04.363840  129434 status.go:257] multinode-947523 status: &{Name:multinode-947523 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0912 22:05:04.363877  129434 status.go:255] checking status of multinode-947523-m02 ...
	I0912 22:05:04.364167  129434 cli_runner.go:164] Run: docker container inspect multinode-947523-m02 --format={{.State.Status}}
	I0912 22:05:04.379579  129434 status.go:330] multinode-947523-m02 host status = "Stopped" (err=<nil>)
	I0912 22:05:04.379598  129434 status.go:343] host is not running, skipping remaining checks
	I0912 22:05:04.379603  129434 status.go:257] multinode-947523-m02 status: &{Name:multinode-947523-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (77.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:344: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-947523 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E0912 22:05:29.800961   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 22:06:06.482554   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-947523 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m16.4622398s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-947523 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (77.05s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-947523
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-947523-m02 --driver=docker  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-947523-m02 --driver=docker  --container-runtime=crio: exit status 14 (58.690884ms)

                                                
                                                
-- stdout --
	* [multinode-947523-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-947523-m02' is duplicated with machine name 'multinode-947523-m02' in profile 'multinode-947523'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-947523-m03 --driver=docker  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-947523-m03 --driver=docker  --container-runtime=crio: (23.373953161s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-947523
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-947523: exit status 80 (258.084446ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-947523
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-947523-m03 already exists in multinode-947523-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-947523-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-947523-m03: (1.849540955s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.58s)
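The two failures above illustrate the naming rules: a new profile may not reuse a machine name that already belongs to another profile, and node add refuses a name that already exists as a standalone profile. A rough sketch of the same collisions:

# exits 14 (MK_USAGE): machine name multinode-947523-m02 already belongs to profile multinode-947523
out/minikube-linux-amd64 start -p multinode-947523-m02 --driver=docker --container-runtime=crio
# a fresh name starts fine, but node add then exits 80 (GUEST_NODE_ADD) because m03 exists as its own profile
out/minikube-linux-amd64 start -p multinode-947523-m03 --driver=docker --container-runtime=crio
out/minikube-linux-amd64 node add -p multinode-947523
out/minikube-linux-amd64 delete -p multinode-947523-m03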

                                                
                                    
x
+
TestPreload (132.28s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0912 22:06:52.844303   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m3.159432578s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-717663 image pull gcr.io/k8s-minikube/busybox
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-717663
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-717663: (5.696462438s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0912 22:08:16.199120   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (1m0.075696139s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-717663 image list
helpers_test.go:175: Cleaning up "test-preload-717663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-717663
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-717663: (2.230273397s)
--- PASS: TestPreload (132.28s)
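The preload scenario above can be replayed by hand; a rough sketch with the flags from this run, the point being that the manually pulled image is still listed after the restart:

# start without the preload tarball on an older Kubernetes, then pull an extra image
out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --wait=true --preload=false \
  --driver=docker --container-runtime=crio --kubernetes-version=v1.24.4
out/minikube-linux-amd64 -p test-preload-717663 image pull gcr.io/k8s-minikube/busybox
# stop, restart, and confirm the pulled image survived
out/minikube-linux-amd64 stop -p test-preload-717663
out/minikube-linux-amd64 start -p test-preload-717663 --memory=2200 --wait=true --driver=docker --container-runtime=crio
out/minikube-linux-amd64 -p test-preload-717663 image list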

                                                
                                    
x
+
TestScheduledStopUnix (96.45s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-554929 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-554929 --memory=2048 --driver=docker  --container-runtime=crio: (21.064017037s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-554929 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-554929 -n scheduled-stop-554929
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-554929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-554929 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-554929 -n scheduled-stop-554929
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-554929
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-554929 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0912 22:10:29.800960   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-554929
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-554929: exit status 7 (56.423538ms)

                                                
                                                
-- stdout --
	scheduled-stop-554929
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-554929 -n scheduled-stop-554929
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-554929 -n scheduled-stop-554929: exit status 7 (57.331331ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-554929" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-554929
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-554929: (4.158456427s)
--- PASS: TestScheduledStopUnix (96.45s)
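Scheduled stop is driven entirely by the stop subcommand; a rough sketch of the schedule/cancel/fire sequence exercised above:

# schedule a stop five minutes out, then cancel it
out/minikube-linux-amd64 stop -p scheduled-stop-554929 --schedule 5m
out/minikube-linux-amd64 stop -p scheduled-stop-554929 --cancel-scheduled
# schedule a short stop and let it fire; status then exits 7 and prints Stopped
out/minikube-linux-amd64 stop -p scheduled-stop-554929 --schedule 15s
out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-554929 -n scheduled-stop-554929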

                                                
                                    
x
+
TestInsufficientStorage (13.21s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-969138 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-969138 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.901923614s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2a685092-d7cc-493d-8833-408808e268dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-969138] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"165f2aa4-9a83-418b-9c39-2f6a87e4c7e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17194"}}
	{"specversion":"1.0","id":"a5bfebda-e72c-4925-888b-2c8b1fdfdd9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e5f9b088-5f03-43d4-9051-ea363f8c22ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig"}}
	{"specversion":"1.0","id":"c53bc0c3-bc59-4a13-ba33-57934a535e27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube"}}
	{"specversion":"1.0","id":"f2cad063-0131-43df-9b8d-228b0887735d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c277dd1f-46e6-4f13-8772-ae9e020838b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64a97503-19c0-4654-aca4-cdfb7526a253","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"163575e9-fc93-4554-aaf1-09ee73b24392","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7dc38fe0-9436-41ce-aaea-6d7d3dc23bc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"71feb16e-36f7-4e97-8cbf-ac516f3b2d89","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d8bef9d5-588a-4afa-ab5c-2ea1fd851070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-969138 in cluster insufficient-storage-969138","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c06d54c-f191-425b-b49c-986e5025c490","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"e74e243e-adb4-4856-ad6a-d0b17d30a09c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"112e0f5b-b879-40a1-8bf1-159e4072206c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-969138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-969138 --output=json --layout=cluster: exit status 7 (250.992466ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-969138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-969138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:10:52.445439  151049 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-969138" does not appear in /home/jenkins/minikube-integration/17194-15878/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-969138 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-969138 --output=json --layout=cluster: exit status 7 (249.61232ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-969138","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-969138","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0912 22:10:52.696234  151151 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-969138" does not appear in /home/jenkins/minikube-integration/17194-15878/kubeconfig
	E0912 22:10:52.705467  151151 status.go:559] unable to read event log: stat: stat /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/insufficient-storage-969138/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-969138" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-969138
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-969138: (1.807224136s)
--- PASS: TestInsufficientStorage (13.21s)
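The start above aborts with exit 26 (RSRC_DOCKER_STORAGE) because the test advertises a nearly full /var. Judging by how they are printed next to the other MINIKUBE_* settings, the capacity figures appear to be injected via environment variables; the following sketch assumes that mechanism:

# pretend /var has 100 units of capacity with only 19 available (assumed env-var mechanism)
MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
  out/minikube-linux-amd64 start -p insufficient-storage-969138 --memory=2048 --output=json --wait=true \
  --driver=docker --container-runtime=crio
# the cluster status view then reports StatusCode 507 (InsufficientStorage); per the error text, --force skips the check
out/minikube-linux-amd64 status -p insufficient-storage-969138 --output=json --layout=cluster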

                                                
                                    
x
+
TestKubernetesUpgrade (354.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0912 22:12:29.529809   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (54.262755633s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-533888
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-533888: (1.328455787s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-533888 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-533888 status --format={{.Host}}: exit status 7 (69.862005ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0912 22:13:16.199081   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m32.317022348s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-533888 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker  --container-runtime=crio: exit status 106 (93.052881ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-533888] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-533888
	    minikube start -p kubernetes-upgrade-533888 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5338882 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-533888 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.235702875s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-533888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-533888
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-533888: (2.170016482s)
--- PASS: TestKubernetesUpgrade (354.55s)
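The upgrade path above is: start on an old Kubernetes, stop, start the same profile with a newer --kubernetes-version, and confirm that going back down is refused. A rough sketch:

# bring up v1.16.0, stop it, then upgrade the same profile to v1.28.1
out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio
out/minikube-linux-amd64 stop -p kubernetes-upgrade-533888
out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.28.1 --driver=docker --container-runtime=crio
# a downgrade attempt exits 106 (K8S_DOWNGRADE_UNSUPPORTED) and suggests delete/recreate instead
out/minikube-linux-amd64 start -p kubernetes-upgrade-533888 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker --container-runtime=crio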

                                                
                                    
x
+
TestMissingContainerUpgrade (152.79s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:322: (dbg) Run:  /tmp/minikube-v1.9.0.3611010923.exe start -p missing-upgrade-397853 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:322: (dbg) Done: /tmp/minikube-v1.9.0.3611010923.exe start -p missing-upgrade-397853 --memory=2200 --driver=docker  --container-runtime=crio: (1m14.514216836s)
version_upgrade_test.go:331: (dbg) Run:  docker stop missing-upgrade-397853
version_upgrade_test.go:331: (dbg) Done: docker stop missing-upgrade-397853: (10.292440912s)
version_upgrade_test.go:336: (dbg) Run:  docker rm missing-upgrade-397853
version_upgrade_test.go:342: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-397853 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:342: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-397853 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.673640948s)
helpers_test.go:175: Cleaning up "missing-upgrade-397853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-397853
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-397853: (3.933543854s)
--- PASS: TestMissingContainerUpgrade (152.79s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (74.333339ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-806545] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
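The MK_USAGE failure above is by design: --no-kubernetes cannot be combined with --kubernetes-version. A rough sketch of the failing call and the recovery the error message suggests:

# exits 14 (MK_USAGE): the two flags are mutually exclusive
out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=crio
# clear any globally configured version, then start without Kubernetes components
out/minikube-linux-amd64 config unset kubernetes-version
out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --driver=docker --container-runtime=crio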

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (36.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806545 --driver=docker  --container-runtime=crio
E0912 22:11:06.482052   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806545 --driver=docker  --container-runtime=crio: (35.73064353s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806545 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (36.11s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --driver=docker  --container-runtime=crio: (6.017107043s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-806545 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-806545 status -o json: exit status 2 (317.961572ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-806545","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-806545
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-806545: (2.132049995s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.47s)
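The exit status 2 from "status -o json" above is expected here: after restarting the existing profile with --no-kubernetes, the container host keeps running while the kubelet and API server are stopped, and that degraded state is reflected in the non-zero exit code. A minimal way to reproduce the check by hand, assuming the profile has not yet been deleted:

	$ out/minikube-linux-amd64 -p NoKubernetes-806545 status -o json; echo "exit=$?"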

                                                
                                    
x
+
TestNoKubernetes/serial/Start (6.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806545 --no-kubernetes --driver=docker  --container-runtime=crio: (6.70861756s)
--- PASS: TestNoKubernetes/serial/Start (6.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806545 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806545 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.172594ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
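The exit status 1 here is the success path for this test: "systemctl is-active" returns non-zero when the queried unit is not active (the inner status 3 reported over ssh is systemctl's usual code for an inactive unit), which confirms the kubelet is not running in a --no-kubernetes profile. A minimal sketch of the same check, assuming the profile from this run:

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-806545 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not active"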

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-806545
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-806545: (1.206057289s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.16s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-806545 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-806545 --driver=docker  --container-runtime=crio: (8.159594226s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.16s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-806545 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-806545 "sudo systemctl is-active --quiet service kubelet": exit status 1 (306.2171ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-511142 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-511142 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (200.027721ms)

                                                
                                                
-- stdout --
	* [false-511142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17194
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0912 22:12:03.229235  174181 out.go:296] Setting OutFile to fd 1 ...
	I0912 22:12:03.229386  174181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:03.229399  174181 out.go:309] Setting ErrFile to fd 2...
	I0912 22:12:03.229406  174181 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0912 22:12:03.229705  174181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17194-15878/.minikube/bin
	I0912 22:12:03.230451  174181 out.go:303] Setting JSON to false
	I0912 22:12:03.231883  174181 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6871,"bootTime":1694549852,"procs":380,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0912 22:12:03.231963  174181 start.go:138] virtualization: kvm guest
	I0912 22:12:03.234230  174181 out.go:177] * [false-511142] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0912 22:12:03.236459  174181 out.go:177]   - MINIKUBE_LOCATION=17194
	I0912 22:12:03.236423  174181 notify.go:220] Checking for updates...
	I0912 22:12:03.237998  174181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0912 22:12:03.239895  174181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17194-15878/kubeconfig
	I0912 22:12:03.241316  174181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17194-15878/.minikube
	I0912 22:12:03.244202  174181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0912 22:12:03.245820  174181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0912 22:12:03.247772  174181 config.go:182] Loaded profile config "cert-expiration-347810": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0912 22:12:03.247907  174181 config.go:182] Loaded profile config "stopped-upgrade-950672": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.18.0
	I0912 22:12:03.248018  174181 driver.go:373] Setting default libvirt URI to qemu:///system
	I0912 22:12:03.281739  174181 docker.go:121] docker version: linux-24.0.6:Docker Engine - Community
	I0912 22:12:03.281825  174181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0912 22:12:03.355782  174181 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:65 SystemTime:2023-09-12 22:12:03.3444979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1041-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33648058368 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:24.0.6 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8165feabfdfe38c65b599c4993d227328c231fca Expected:8165feabfdfe38c65b599c4993d227328c231fca} RuncCommit:{ID:v1.1.8-0-g82f18fe Expected:v1.1.8-0-g82f18fe} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.11.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.21.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0912 22:12:03.355927  174181 docker.go:294] overlay module found
	I0912 22:12:03.357979  174181 out.go:177] * Using the docker driver based on user configuration
	I0912 22:12:03.359361  174181 start.go:298] selected driver: docker
	I0912 22:12:03.359380  174181 start.go:902] validating driver "docker" against <nil>
	I0912 22:12:03.359399  174181 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0912 22:12:03.361787  174181 out.go:177] 
	W0912 22:12:03.363136  174181 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0912 22:12:03.364529  174181 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-511142 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-347810
contexts:
- context:
    cluster: cert-expiration-347810
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-347810
  name: cert-expiration-347810
current-context: cert-expiration-347810
kind: Config
preferences: {}
users:
- name: cert-expiration-347810
  user:
    client-certificate: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.crt
    client-key: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.key
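The kubeconfig dumped above belongs to the still-running cert-expiration-347810 profile, not to false-511142: the false-511142 cluster was never created (start exited with MK_USAGE), so every context-based probe in this debugLogs section fails with "context was not found". A quick way to confirm which contexts exist, assuming kubectl points at the same KUBECONFIG used by this run:

	$ kubectl config get-contexts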

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-511142

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-511142"

                                                
                                                
----------------------- debugLogs end: false-511142 [took: 3.344266459s] --------------------------------
helpers_test.go:175: Cleaning up "false-511142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-511142
--- PASS: TestNetworkPlugins/group/false (3.71s)
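This group passes by asserting the usage error: with --container-runtime=crio a CNI is mandatory, so --cni=false is rejected with MK_USAGE before any cluster is created. A minimal sketch of crio-compatible alternatives, taken from the combinations exercised elsewhere in this report (kindnet, calico, flannel, bridge, a custom manifest, or the default CNI); the profile name example-511142 is a placeholder:

	$ out/minikube-linux-amd64 start -p example-511142 --cni=kindnet --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 start -p example-511142 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=crio
	$ out/minikube-linux-amd64 start -p example-511142 --enable-default-cni=true --driver=docker --container-runtime=crio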

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-950672
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.47s)

                                                
                                    
x
+
TestPause/serial/Start (41.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-959901 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-959901 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.482598817s)
--- PASS: TestPause/serial/Start (41.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (42.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E0912 22:14:39.245258   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.05480553s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.05s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (44.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.609388219s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sqzv8" [1e9a4ded-5f50-40d3-9406-63b16103e034] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sqzv8" [1e9a4ded-5f50-40d3-9406-63b16103e034] Running
E0912 22:15:29.801261   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.009737466s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
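The Localhost and HairPin checks above both reuse the netcat deployment started in NetCatPod: Localhost probes port 8080 on 127.0.0.1 from inside the pod, while HairPin probes the pod back through the name "netcat", i.e. hairpin traffic via its own Service. Both are plain nc probes, as logged above, e.g.:

	$ kubectl --context auto-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"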

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m0.857330848s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rm5qw" [18b6f27d-5f3d-4d34-9686-d24bb3d27c25] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018418244s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-pqzq2" [25e88a96-3a6e-4f96-891e-743b5af6b36c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-pqzq2" [25e88a96-3a6e-4f96-891e-743b5af6b36c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00966097s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (60.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m0.596862565s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (39.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.403813166s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-x7jdj" [3b97765a-ac63-425c-9ad8-ff13e29b2cc0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021470309s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gvqnd" [4263f66a-39f7-4ba8-956b-713c4740618e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-gvqnd" [4263f66a-39f7-4ba8-956b-713c4740618e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.008891339s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6gfcq" [a05bbf59-79ac-4749-999e-1f9464091400] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6gfcq" [a05bbf59-79ac-4749-999e-1f9464091400] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.010584182s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-58lwv" [6c08a4bd-c94e-404e-bd60-d151aa352909] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-58lwv" [6c08a4bd-c94e-404e-bd60-d151aa352909] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.009407909s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (59.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (59.139981782s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (33.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.163131166s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.166414719s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (33.02s)
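Unlike the other DNS checks in this group, this one needed retries: the first two nslookup attempts timed out ("connection timed out; no servers could be reached") before the third succeeded, so the subtest still passes but takes roughly 33s instead of well under a second. The probe is the same one-liner each time and can be re-run by hand, assuming the context still exists:

	$ kubectl --context enable-default-cni-511142 exec deployment/netcat -- nslookup kubernetes.default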

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (37.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-511142 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.044526495s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (125.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-227070 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0912 22:18:16.199083   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-227070 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (2m5.055639908s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (125.06s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xpq8s" [67e23cdd-9a81-4bf7-b642-dc41d2132cd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xpq8s" [67e23cdd-9a81-4bf7-b642-dc41d2132cd4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.010144482s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

TestStartStop/group/no-preload/serial/FirstStart (54.52s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-401928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-401928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (54.517528457s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.52s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-k6q5t" [c45e6dde-dcd7-48ce-b0af-9ec43fca4d1b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.017679995s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-511142 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-511142 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lv7jr" [8345a0d9-b1a3-468a-8ac9-4739fa760099] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lv7jr" [8345a0d9-b1a3-468a-8ac9-4739fa760099] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009604341s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.35s)

TestNetworkPlugins/group/bridge/DNS (32.94s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-511142 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-511142 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.19936688s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-511142 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-511142 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.186414514s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (32.94s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-511142 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

TestStartStop/group/embed-certs/serial/FirstStart (43.96s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-979539 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-979539 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (43.958185886s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.96s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-511142 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/no-preload/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-401928 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b6d56a4a-761a-4ec9-b675-0bc03d4ede99] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b6d56a4a-761a-4ec9-b675-0bc03d4ede99] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.018425649s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-401928 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-505842 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-505842 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (40.782311039s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.78s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-401928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-401928 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.119718583s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-401928 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.28s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-401928 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-401928 --alsologtostderr -v=3: (12.277957993s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.28s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-401928 -n no-preload-401928
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-401928 -n no-preload-401928: exit status 7 (60.818026ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-401928 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (334.19s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-401928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-401928 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m33.866085473s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-401928 -n no-preload-401928
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (334.19s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-979539 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44e5ba5f-a2df-4a10-b457-0caa0f216a07] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [44e5ba5f-a2df-4a10-b457-0caa0f216a07] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.016762638s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-979539 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-979539 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-979539 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

TestStartStop/group/embed-certs/serial/Stop (12.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-979539 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-979539 --alsologtostderr -v=3: (12.179724834s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.18s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-505842 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f0e8297-0207-4918-b0d7-3872cc66daa1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f0e8297-0207-4918-b0d7-3872cc66daa1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.014396761s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-505842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.34s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-227070 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [54dc9add-dc39-4ba3-a329-2d3562f908f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [54dc9add-dc39-4ba3-a329-2d3562f908f9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.013219601s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-227070 exec busybox -- /bin/sh -c "ulimit -n"
E0912 22:20:19.671391   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:19.676570   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:19.686831   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:19.707084   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:19.747342   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-979539 -n embed-certs-979539
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-979539 -n embed-certs-979539: exit status 7 (73.402285ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-979539 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/embed-certs/serial/SecondStart (336.46s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-979539 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-979539 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m35.924827518s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-979539 -n embed-certs-979539
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (336.46s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-505842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-505842 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-505842 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-505842 --alsologtostderr -v=3: (11.936223522s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-227070 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0912 22:20:19.827925   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:19.988318   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:20.308457   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-227070 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-227070 --alsologtostderr -v=3
E0912 22:20:20.948910   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:22.230094   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:20:24.790806   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-227070 --alsologtostderr -v=3: (11.906199916s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.91s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842: exit status 7 (60.238945ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-505842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (343.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-505842 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0912 22:20:29.800762   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 22:20:29.910961   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-505842 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (5m42.698917137s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
E0912 22:26:11.927878   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (343.03s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-227070 -n old-k8s-version-227070
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-227070 -n old-k8s-version-227070: exit status 7 (68.683461ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-227070 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (417.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-227070 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0
E0912 22:20:40.151846   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:21:00.632470   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:21:01.912144   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:01.917380   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:01.927666   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:01.947936   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:01.988210   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:02.068813   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:02.229012   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:02.549324   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:03.189811   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:04.470631   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:06.481874   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
E0912 22:21:07.031457   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:12.152003   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:22.392178   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:41.593636   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:21:42.872422   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:21:52.105767   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.111037   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.121290   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.141556   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.181835   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.262165   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.422931   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:52.743038   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:53.384180   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:54.665278   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:21:57.226300   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:22:02.347298   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:22:12.588131   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:22:14.863295   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:14.868514   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:14.878761   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:14.899001   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:14.939283   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:15.019556   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:15.180163   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:15.501309   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:16.141823   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:17.422374   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:19.119014   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.124267   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.134542   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.154855   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.195120   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.275425   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.436234   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.757093   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:19.982588   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:20.397883   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:21.678024   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:23.833318   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:22:24.238422   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:25.102755   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:29.359369   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:33.068736   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:22:35.343281   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:22:39.600256   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:22:55.824376   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:23:00.081261   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:23:03.514655   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
E0912 22:23:14.029638   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:23:16.199430   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/ingress-addon-legacy-704515/client.crt: no such file or directory
E0912 22:23:24.360511   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.365749   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.376004   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.396313   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.436579   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.516913   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.677307   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:24.997935   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:25.638553   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:26.919097   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:28.085012   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.090265   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.100514   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.120768   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.161111   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.241452   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.401784   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:28.722040   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:29.362975   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:29.480094   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:30.643445   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:32.845084   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
E0912 22:23:33.203588   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:34.600733   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:36.784859   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:23:38.324699   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:23:41.042000   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
E0912 22:23:44.841787   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:23:45.753812   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
E0912 22:23:48.565308   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:24:05.322944   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:24:09.046154   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:24:35.950328   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/calico-511142/client.crt: no such file or directory
E0912 22:24:46.283157   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
E0912 22:24:50.006848   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/flannel-511142/client.crt: no such file or directory
E0912 22:24:58.705985   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
E0912 22:25:02.962367   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/enable-default-cni-511142/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-227070 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.16.0: (6m56.959595603s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-227070 -n old-k8s-version-227070
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (417.25s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n4zfl" [3898126f-0eb7-4692-a827-39c91f1aa48a] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0912 22:25:19.671487   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n4zfl" [3898126f-0eb7-4692-a827-39c91f1aa48a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 12.017578619s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (12.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-n4zfl" [3898126f-0eb7-4692-a827-39c91f1aa48a] Running
E0912 22:25:29.801551   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/addons-348433/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009781421s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-401928 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-401928 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.34s)
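
For reference, the VerifyKubernetesImages subtests in this report all do the same thing: list the images known to the container runtime over SSH and flag anything outside the expected minikube set. A minimal sketch of running the same check by hand, using the profile name from this run; the jq field names are an assumption based on the CRI image-list JSON and are not part of the test:

    out/minikube-linux-amd64 ssh -p no-preload-401928 "sudo crictl images -o json" \
      | jq -r '.images[].repoTags[]'
    # compare the printed tags against the images minikube deploys itself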

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-401928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-401928 -n no-preload-401928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-401928 -n no-preload-401928: exit status 2 (317.312211ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-401928 -n no-preload-401928
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-401928 -n no-preload-401928: exit status 2 (327.225497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-401928 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-401928 -n no-preload-401928
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-401928 -n no-preload-401928
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.89s)
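
The remaining Pause subtests in this report repeat the command sequence captured above. A minimal sketch of reproducing it by hand for this profile; the expected outputs are taken from the stdout shown above, and the test tolerates exit status 2 from the status checks while the cluster is paused:

    out/minikube-linux-amd64 pause -p no-preload-401928 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-401928   # prints "Paused" (exit status 2)
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-401928     # prints "Stopped" (exit status 2)
    out/minikube-linux-amd64 unpause -p no-preload-401928 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-401928
    out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p no-preload-401928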

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-616740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
E0912 22:25:47.355349   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/auto-511142/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-616740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (39.001799027s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-24clk" [6e1c3160-a0e4-413c-830a-cbda7c352080] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-24clk" [6e1c3160-a0e4-413c-830a-cbda7c352080] Running
E0912 22:26:01.912407   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/kindnet-511142/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.020504345s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-24clk" [6e1c3160-a0e4-413c-830a-cbda7c352080] Running
E0912 22:26:06.482200   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/functional-728577/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01026359s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-979539 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-979539 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-979539 --alsologtostderr -v=1
E0912 22:26:08.203373   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/bridge-511142/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-979539 -n embed-certs-979539
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-979539 -n embed-certs-979539: exit status 2 (314.325726ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-979539 -n embed-certs-979539
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-979539 -n embed-certs-979539: exit status 2 (309.816091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-979539 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-979539 -n embed-certs-979539
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-979539 -n embed-certs-979539
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcfxz" [a9b0d559-78e5-4677-88ba-cd29b2d29648] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcfxz" [a9b0d559-78e5-4677-88ba-cd29b2d29648] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.018318697s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-616740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-616740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.25281757s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-616740 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-616740 --alsologtostderr -v=3: (12.212621892s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-dcfxz" [a9b0d559-78e5-4677-88ba-cd29b2d29648] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00892988s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-505842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-505842 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-505842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842: exit status 2 (310.839607ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842: exit status 2 (312.777142ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-505842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-505842 -n default-k8s-diff-port-505842
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.82s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616740 -n newest-cni-616740
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616740 -n newest-cni-616740: exit status 7 (59.083241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-616740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (26.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-616740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-616740 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.1: (26.425489947s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-616740 -n newest-cni-616740
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (26.72s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-616740 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-616740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616740 -n newest-cni-616740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616740 -n newest-cni-616740: exit status 2 (278.81065ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616740 -n newest-cni-616740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616740 -n newest-cni-616740: exit status 2 (277.047148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-616740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-616740 -n newest-cni-616740
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-616740 -n newest-cni-616740
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6z6rs" [926a4a81-a6da-4982-9231-d9f5a302aab3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013877456s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-6z6rs" [926a4a81-a6da-4982-9231-d9f5a302aab3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007660836s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-227070 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-227070 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-227070 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-227070 -n old-k8s-version-227070
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-227070 -n old-k8s-version-227070: exit status 2 (269.739098ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-227070 -n old-k8s-version-227070
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-227070 -n old-k8s-version-227070: exit status 2 (269.833382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-227070 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-227070 -n old-k8s-version-227070
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-227070 -n old-k8s-version-227070
E0912 22:27:42.547042   22698 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/custom-flannel-511142/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.50s)

                                                
                                    

Test skip (24/298)

x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-511142 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-347810
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.85.2:8555
  name: cert-options-272741
contexts:
- context:
    cluster: cert-expiration-347810
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-347810
  name: cert-expiration-347810
- context:
    cluster: cert-options-272741
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-options-272741
  name: cert-options-272741
current-context: cert-expiration-347810
kind: Config
preferences: {}
users:
- name: cert-expiration-347810
  user:
    client-certificate: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.crt
    client-key: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.key
- name: cert-options-272741
  user:
    client-certificate: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-options-272741/client.crt
    client-key: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-options-272741/client.key
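
The repeated "context was not found for specified context: kubenet-511142" errors above are consistent with this kubeconfig: it only defines the cert-expiration-347810 and cert-options-272741 contexts, because the kubenet profile was skipped and never created. A minimal sketch (plain kubectl against the same kubeconfig, not part of the test) of inspecting it:

    kubectl config get-contexts         # lists cert-expiration-347810 and cert-options-272741
    kubectl config current-context      # cert-expiration-347810
    kubectl config view --minify        # show only the entries for the active context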

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-511142

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-511142"

                                                
                                                
----------------------- debugLogs end: kubenet-511142 [took: 3.955233291s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-511142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-511142
--- SKIP: TestNetworkPlugins/group/kubenet (4.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-511142 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-511142" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17194-15878/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: cluster_info
    server: https://192.168.67.2:8443
  name: cert-expiration-347810
contexts:
- context:
    cluster: cert-expiration-347810
    extensions:
    - extension:
        last-update: Tue, 12 Sep 2023 22:11:59 UTC
        provider: minikube.sigs.k8s.io
        version: v1.31.2
      name: context_info
    namespace: default
    user: cert-expiration-347810
  name: cert-expiration-347810
current-context: cert-expiration-347810
kind: Config
preferences: {}
users:
- name: cert-expiration-347810
  user:
    client-certificate: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.crt
    client-key: /home/jenkins/minikube-integration/17194-15878/.minikube/profiles/cert-expiration-347810/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-511142

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-511142" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-511142"

                                                
                                                
----------------------- debugLogs end: cilium-511142 [took: 4.718186657s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-511142" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-511142
--- SKIP: TestNetworkPlugins/group/cilium (4.89s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-695156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-695156
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    